00:00:00.001 Started by upstream project "autotest-per-patch" build number 126240 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.197 Using shallow fetch with depth 1 00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.197 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.250 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.250 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.734 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.743 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.754 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.754 > git config core.sparsecheckout # timeout=10 00:00:04.763 > git read-tree -mu HEAD # timeout=10 00:00:04.778 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.798 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.798 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.882 [Pipeline] Start of Pipeline 00:00:04.896 [Pipeline] library 00:00:04.898 Loading library shm_lib@master 00:00:04.898 Library shm_lib@master is cached. Copying from home. 00:00:04.915 [Pipeline] node 00:00:04.921 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.925 [Pipeline] { 00:00:04.933 [Pipeline] catchError 00:00:04.934 [Pipeline] { 00:00:04.944 [Pipeline] wrap 00:00:04.950 [Pipeline] { 00:00:04.956 [Pipeline] stage 00:00:04.958 [Pipeline] { (Prologue) 00:00:04.977 [Pipeline] echo 00:00:04.979 Node: VM-host-SM17 00:00:04.985 [Pipeline] cleanWs 00:00:04.994 [WS-CLEANUP] Deleting project workspace... 00:00:04.994 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.999 [WS-CLEANUP] done 00:00:05.210 [Pipeline] setCustomBuildProperty 00:00:05.291 [Pipeline] httpRequest 00:00:05.312 [Pipeline] echo 00:00:05.314 Sorcerer 10.211.164.101 is alive 00:00:05.322 [Pipeline] httpRequest 00:00:05.325 HttpMethod: GET 00:00:05.326 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.326 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.327 Response Code: HTTP/1.1 200 OK 00:00:05.328 Success: Status code 200 is in the accepted range: 200,404 00:00:05.329 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.351 [Pipeline] sh 00:00:06.629 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.641 [Pipeline] httpRequest 00:00:06.667 [Pipeline] echo 00:00:06.669 Sorcerer 10.211.164.101 is alive 00:00:06.678 [Pipeline] httpRequest 00:00:06.706 HttpMethod: GET 00:00:06.707 URL: http://10.211.164.101/packages/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:06.707 Sending request to url: http://10.211.164.101/packages/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:06.708 Response Code: HTTP/1.1 200 OK 00:00:06.709 Success: Status code 200 is in the accepted range: 200,404 00:00:06.709 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:27.032 [Pipeline] sh 00:00:27.312 + tar --no-same-owner -xf spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:30.599 [Pipeline] sh 00:00:30.879 + git -C spdk log --oneline -n5 00:00:30.879 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:30.879 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:30.879 b26ca8289 event: add enforce_numa app option 00:00:30.879 83c8cffdc env: add enforce_numa environment option 00:00:30.879 804b11b4b env_dpdk: assert that SOCKET_ID_ANY == SPDK_ENV_SOCKET_ID_ANY 00:00:30.897 [Pipeline] writeFile 00:00:30.914 [Pipeline] sh 00:00:31.193 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.205 [Pipeline] sh 00:00:31.484 + cat autorun-spdk.conf 00:00:31.484 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.484 SPDK_TEST_NVMF=1 00:00:31.484 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.484 SPDK_TEST_URING=1 00:00:31.484 SPDK_TEST_USDT=1 00:00:31.484 SPDK_RUN_UBSAN=1 00:00:31.484 NET_TYPE=virt 00:00:31.484 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.491 RUN_NIGHTLY=0 00:00:31.495 [Pipeline] } 00:00:31.515 [Pipeline] // stage 00:00:31.536 [Pipeline] stage 00:00:31.539 [Pipeline] { (Run VM) 00:00:31.556 [Pipeline] sh 00:00:31.876 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.876 + echo 'Start stage prepare_nvme.sh' 00:00:31.876 Start stage prepare_nvme.sh 00:00:31.876 + [[ -n 7 ]] 00:00:31.876 + disk_prefix=ex7 00:00:31.876 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:31.876 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:31.876 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:31.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.876 ++ SPDK_TEST_NVMF=1 00:00:31.876 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.876 ++ SPDK_TEST_URING=1 00:00:31.876 ++ SPDK_TEST_USDT=1 00:00:31.876 ++ SPDK_RUN_UBSAN=1 00:00:31.876 ++ NET_TYPE=virt 00:00:31.876 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.876 ++ RUN_NIGHTLY=0 00:00:31.876 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.876 + nvme_files=() 00:00:31.876 + declare -A nvme_files 00:00:31.876 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.876 + nvme_files['nvme.img']=5G 00:00:31.876 + nvme_files['nvme-cmb.img']=5G 00:00:31.876 + nvme_files['nvme-multi0.img']=4G 00:00:31.876 + nvme_files['nvme-multi1.img']=4G 00:00:31.876 + nvme_files['nvme-multi2.img']=4G 00:00:31.876 + nvme_files['nvme-openstack.img']=8G 00:00:31.876 + nvme_files['nvme-zns.img']=5G 00:00:31.876 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.876 + (( SPDK_TEST_FTL == 1 )) 00:00:31.876 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.876 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:31.876 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.876 + for nvme in "${!nvme_files[@]}" 00:00:31.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:32.812 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.812 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:32.812 + echo 'End stage prepare_nvme.sh' 00:00:32.812 End stage prepare_nvme.sh 00:00:32.825 [Pipeline] sh 00:00:33.107 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.107 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:33.107 00:00:33.107 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:33.107 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:33.107 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:33.107 HELP=0 00:00:33.107 DRY_RUN=0 00:00:33.107 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:33.107 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.107 NVME_AUTO_CREATE=0 00:00:33.107 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:33.107 NVME_CMB=,, 00:00:33.107 NVME_PMR=,, 00:00:33.107 NVME_ZNS=,, 00:00:33.107 NVME_MS=,, 00:00:33.107 NVME_FDP=,, 00:00:33.107 SPDK_VAGRANT_DISTRO=fedora38 00:00:33.107 SPDK_VAGRANT_VMCPU=10 00:00:33.107 SPDK_VAGRANT_VMRAM=12288 00:00:33.107 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.107 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.107 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.107 SPDK_OPENSTACK_NETWORK=0 00:00:33.107 VAGRANT_PACKAGE_BOX=0 00:00:33.107 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.107 FORCE_DISTRO=true 00:00:33.107 VAGRANT_BOX_VERSION= 00:00:33.107 EXTRA_VAGRANTFILES= 00:00:33.107 NIC_MODEL=e1000 00:00:33.107 00:00:33.107 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:33.107 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:36.393 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.961 ==> default: Creating image (snapshot of base box volume). 00:00:36.961 ==> default: Creating domain with the following settings... 00:00:36.961 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721072310_450665ba7a76f4d8db92 00:00:36.961 ==> default: -- Domain type: kvm 00:00:36.961 ==> default: -- Cpus: 10 00:00:36.961 ==> default: -- Feature: acpi 00:00:36.961 ==> default: -- Feature: apic 00:00:36.961 ==> default: -- Feature: pae 00:00:36.961 ==> default: -- Memory: 12288M 00:00:36.961 ==> default: -- Memory Backing: hugepages: 00:00:36.961 ==> default: -- Management MAC: 00:00:36.961 ==> default: -- Loader: 00:00:36.961 ==> default: -- Nvram: 00:00:36.961 ==> default: -- Base box: spdk/fedora38 00:00:36.961 ==> default: -- Storage pool: default 00:00:36.961 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721072310_450665ba7a76f4d8db92.img (20G) 00:00:36.961 ==> default: -- Volume Cache: default 00:00:36.961 ==> default: -- Kernel: 00:00:36.961 ==> default: -- Initrd: 00:00:36.961 ==> default: -- Graphics Type: vnc 00:00:36.961 ==> default: -- Graphics Port: -1 00:00:36.961 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.961 ==> default: -- Graphics Password: Not defined 00:00:36.961 ==> default: -- Video Type: cirrus 00:00:36.961 ==> default: -- Video VRAM: 9216 00:00:36.961 ==> default: -- Sound Type: 00:00:36.961 ==> default: -- Keymap: en-us 00:00:36.961 ==> default: -- TPM Path: 00:00:36.961 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.961 ==> default: -- Command line args: 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.961 ==> default: -> value=-drive, 00:00:36.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.961 ==> default: -> value=-drive, 00:00:36.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.961 ==> default: -> value=-drive, 00:00:36.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.961 ==> default: -> value=-drive, 00:00:36.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.961 ==> default: -> value=-device, 00:00:36.961 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.220 ==> default: Creating shared folders metadata... 00:00:37.220 ==> default: Starting domain. 00:00:39.126 ==> default: Waiting for domain to get an IP address... 00:00:54.018 ==> default: Waiting for SSH to become available... 00:00:54.955 ==> default: Configuring and enabling network interfaces... 00:00:59.144 default: SSH address: 192.168.121.32:22 00:00:59.144 default: SSH username: vagrant 00:00:59.144 default: SSH auth method: private key 00:01:01.042 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.253 ==> default: Mounting SSHFS shared folder... 00:01:09.819 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:09.819 ==> default: Checking Mount.. 00:01:11.191 ==> default: Folder Successfully Mounted! 00:01:11.191 ==> default: Running provisioner: file... 00:01:11.789 default: ~/.gitconfig => .gitconfig 00:01:12.356 00:01:12.356 SUCCESS! 00:01:12.356 00:01:12.356 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:12.356 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:12.356 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:12.356 00:01:12.365 [Pipeline] } 00:01:12.386 [Pipeline] // stage 00:01:12.397 [Pipeline] dir 00:01:12.398 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:12.400 [Pipeline] { 00:01:12.414 [Pipeline] catchError 00:01:12.416 [Pipeline] { 00:01:12.433 [Pipeline] sh 00:01:12.716 + vagrant ssh-config --host vagrant 00:01:12.716 + sed -ne /^Host/,$p 00:01:12.716 + tee ssh_conf 00:01:16.902 Host vagrant 00:01:16.902 HostName 192.168.121.32 00:01:16.902 User vagrant 00:01:16.902 Port 22 00:01:16.902 UserKnownHostsFile /dev/null 00:01:16.902 StrictHostKeyChecking no 00:01:16.902 PasswordAuthentication no 00:01:16.902 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:16.902 IdentitiesOnly yes 00:01:16.902 LogLevel FATAL 00:01:16.902 ForwardAgent yes 00:01:16.902 ForwardX11 yes 00:01:16.902 00:01:16.915 [Pipeline] withEnv 00:01:16.917 [Pipeline] { 00:01:16.930 [Pipeline] sh 00:01:17.204 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.204 source /etc/os-release 00:01:17.204 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.204 # Minimal, systemd-like check. 00:01:17.204 if [[ -e /.dockerenv ]]; then 00:01:17.204 # Clear garbage from the node's name: 00:01:17.204 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.204 # $HOSTNAME is the actual container id 00:01:17.204 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.204 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.204 # We can assume this is a mount from a host where container is running, 00:01:17.204 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.204 container="$(< /etc/hostname) ($agent)" 00:01:17.204 else 00:01:17.204 # Fallback 00:01:17.204 container=$agent 00:01:17.204 fi 00:01:17.204 fi 00:01:17.204 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.204 00:01:17.215 [Pipeline] } 00:01:17.235 [Pipeline] // withEnv 00:01:17.243 [Pipeline] setCustomBuildProperty 00:01:17.258 [Pipeline] stage 00:01:17.261 [Pipeline] { (Tests) 00:01:17.281 [Pipeline] sh 00:01:17.559 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.833 [Pipeline] sh 00:01:18.113 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:18.129 [Pipeline] timeout 00:01:18.130 Timeout set to expire in 30 min 00:01:18.132 [Pipeline] { 00:01:18.151 [Pipeline] sh 00:01:18.432 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.038 HEAD is now at 91f51bb85 nvme: populate socket_id for pcie controllers 00:01:19.051 [Pipeline] sh 00:01:19.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:19.619 [Pipeline] sh 00:01:19.898 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:19.914 [Pipeline] sh 00:01:20.192 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:20.193 ++ readlink -f spdk_repo 00:01:20.193 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:20.193 + [[ -n /home/vagrant/spdk_repo ]] 00:01:20.193 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:20.193 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:20.193 + [[ 
-d /home/vagrant/spdk_repo/spdk ]] 00:01:20.193 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:20.193 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:20.193 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:20.193 + cd /home/vagrant/spdk_repo 00:01:20.193 + source /etc/os-release 00:01:20.193 ++ NAME='Fedora Linux' 00:01:20.193 ++ VERSION='38 (Cloud Edition)' 00:01:20.193 ++ ID=fedora 00:01:20.193 ++ VERSION_ID=38 00:01:20.193 ++ VERSION_CODENAME= 00:01:20.193 ++ PLATFORM_ID=platform:f38 00:01:20.193 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.193 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.193 ++ LOGO=fedora-logo-icon 00:01:20.193 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.193 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.193 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.193 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.193 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.193 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.193 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.193 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.193 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.193 ++ SUPPORT_END=2024-05-14 00:01:20.193 ++ VARIANT='Cloud Edition' 00:01:20.193 ++ VARIANT_ID=cloud 00:01:20.193 + uname -a 00:01:20.193 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.193 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:20.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:20.760 Hugepages 00:01:20.760 node hugesize free / total 00:01:20.760 node0 1048576kB 0 / 0 00:01:20.760 node0 2048kB 0 / 0 00:01:20.760 00:01:20.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:20.760 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:20.760 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:20.760 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:20.760 + rm -f /tmp/spdk-ld-path 00:01:20.760 + source autorun-spdk.conf 00:01:20.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.760 ++ SPDK_TEST_NVMF=1 00:01:20.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.760 ++ SPDK_TEST_URING=1 00:01:20.760 ++ SPDK_TEST_USDT=1 00:01:20.760 ++ SPDK_RUN_UBSAN=1 00:01:20.760 ++ NET_TYPE=virt 00:01:20.760 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.760 ++ RUN_NIGHTLY=0 00:01:20.760 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.760 + [[ -n '' ]] 00:01:20.760 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:21.019 + for M in /var/spdk/build-*-manifest.txt 00:01:21.019 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.019 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.019 + for M in /var/spdk/build-*-manifest.txt 00:01:21.019 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.019 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.019 ++ uname 00:01:21.019 + [[ Linux == \L\i\n\u\x ]] 00:01:21.019 + sudo dmesg -T 00:01:21.019 + sudo dmesg --clear 00:01:21.019 + dmesg_pid=5095 00:01:21.019 + [[ Fedora Linux == FreeBSD ]] 00:01:21.019 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.019 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.019 + sudo dmesg -Tw 00:01:21.019 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.019 + [[ -x /usr/src/fio-static/fio ]] 
00:01:21.019 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.019 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.019 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.019 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:21.019 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.019 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.019 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.019 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.019 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.019 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.019 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.019 Test configuration: 00:01:21.019 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.019 SPDK_TEST_NVMF=1 00:01:21.019 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.019 SPDK_TEST_URING=1 00:01:21.019 SPDK_TEST_USDT=1 00:01:21.019 SPDK_RUN_UBSAN=1 00:01:21.019 NET_TYPE=virt 00:01:21.019 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.019 RUN_NIGHTLY=0 19:39:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:21.019 19:39:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.019 19:39:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.019 19:39:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.019 19:39:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 19:39:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 19:39:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 19:39:15 -- paths/export.sh@5 -- $ export PATH 00:01:21.019 19:39:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 19:39:15 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:21.019 19:39:15 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:21.019 19:39:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721072355.XXXXXX 00:01:21.019 19:39:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721072355.GwDDzL 
00:01:21.019 19:39:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:21.019 19:39:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:21.019 19:39:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:21.019 19:39:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:21.019 19:39:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.019 19:39:15 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:21.019 19:39:15 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:21.019 19:39:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.019 19:39:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:21.019 19:39:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:21.019 19:39:15 -- pm/common@17 -- $ local monitor 00:01:21.019 19:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 19:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 19:39:15 -- pm/common@25 -- $ sleep 1 00:01:21.019 19:39:15 -- pm/common@21 -- $ date +%s 00:01:21.019 19:39:15 -- pm/common@21 -- $ date +%s 00:01:21.019 19:39:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721072355 00:01:21.019 19:39:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721072355 00:01:21.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721072355_collect-vmstat.pm.log 00:01:21.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721072355_collect-cpu-load.pm.log 00:01:22.409 19:39:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:22.409 19:39:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.409 19:39:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.409 19:39:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:22.409 19:39:16 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.409 Mon Jul 15 07:39:16 PM UTC 2024 00:01:22.409 19:39:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.409 v24.09-pre-231-g91f51bb85 00:01:22.409 19:39:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.409 19:39:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.409 19:39:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.409 19:39:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:22.409 19:39:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:22.409 19:39:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.409 ************************************ 00:01:22.409 START TEST ubsan 00:01:22.409 ************************************ 00:01:22.409 using ubsan 00:01:22.409 19:39:16 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:22.409 00:01:22.409 real 0m0.000s 00:01:22.409 user 0m0.000s 00:01:22.409 sys 
0m0.000s 00:01:22.409 19:39:16 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:22.409 19:39:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.409 ************************************ 00:01:22.409 END TEST ubsan 00:01:22.409 ************************************ 00:01:22.409 19:39:16 -- common/autotest_common.sh@1142 -- $ return 0 00:01:22.409 19:39:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:22.409 19:39:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:22.409 19:39:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:22.409 19:39:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:22.409 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:22.410 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:22.668 Using 'verbs' RDMA provider 00:01:38.478 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:50.721 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:50.721 Creating mk/config.mk...done. 00:01:50.721 Creating mk/cc.flags.mk...done. 00:01:50.721 Type 'make' to build. 00:01:50.721 19:39:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:50.721 19:39:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:50.721 19:39:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:50.721 19:39:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.721 ************************************ 00:01:50.721 START TEST make 00:01:50.721 ************************************ 00:01:50.721 19:39:43 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:50.721 make[1]: Nothing to be done for 'all'. 
00:02:00.695 The Meson build system 00:02:00.695 Version: 1.3.1 00:02:00.695 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:00.695 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:00.695 Build type: native build 00:02:00.695 Program cat found: YES (/usr/bin/cat) 00:02:00.695 Project name: DPDK 00:02:00.695 Project version: 24.03.0 00:02:00.695 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.695 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.695 Host machine cpu family: x86_64 00:02:00.695 Host machine cpu: x86_64 00:02:00.695 Message: ## Building in Developer Mode ## 00:02:00.695 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.695 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.695 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.695 Program python3 found: YES (/usr/bin/python3) 00:02:00.695 Program cat found: YES (/usr/bin/cat) 00:02:00.695 Compiler for C supports arguments -march=native: YES 00:02:00.695 Checking for size of "void *" : 8 00:02:00.695 Checking for size of "void *" : 8 (cached) 00:02:00.695 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:00.695 Library m found: YES 00:02:00.695 Library numa found: YES 00:02:00.695 Has header "numaif.h" : YES 00:02:00.695 Library fdt found: NO 00:02:00.695 Library execinfo found: NO 00:02:00.695 Has header "execinfo.h" : YES 00:02:00.695 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.695 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.695 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.695 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.695 Run-time dependency openssl found: YES 3.0.9 00:02:00.695 Run-time dependency libpcap found: YES 1.10.4 00:02:00.695 Has header "pcap.h" with dependency libpcap: YES 00:02:00.695 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.695 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.695 Compiler for C supports arguments -Wformat: YES 00:02:00.695 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.695 Compiler for C supports arguments -Wformat-security: NO 00:02:00.695 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.695 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.695 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.695 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.695 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.695 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.695 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.695 Compiler for C supports arguments -Wundef: YES 00:02:00.695 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.695 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.695 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.695 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.695 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.695 Program objdump found: YES (/usr/bin/objdump) 00:02:00.695 Compiler for C supports arguments -mavx512f: YES 00:02:00.695 Checking if "AVX512 checking" compiles: YES 00:02:00.695 Fetching value of define "__SSE4_2__" : 1 00:02:00.695 Fetching value of define 
"__AES__" : 1 00:02:00.695 Fetching value of define "__AVX__" : 1 00:02:00.695 Fetching value of define "__AVX2__" : 1 00:02:00.695 Fetching value of define "__AVX512BW__" : (undefined) 00:02:00.695 Fetching value of define "__AVX512CD__" : (undefined) 00:02:00.695 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:00.695 Fetching value of define "__AVX512F__" : (undefined) 00:02:00.695 Fetching value of define "__AVX512VL__" : (undefined) 00:02:00.695 Fetching value of define "__PCLMUL__" : 1 00:02:00.695 Fetching value of define "__RDRND__" : 1 00:02:00.695 Fetching value of define "__RDSEED__" : 1 00:02:00.695 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.695 Fetching value of define "__znver1__" : (undefined) 00:02:00.695 Fetching value of define "__znver2__" : (undefined) 00:02:00.695 Fetching value of define "__znver3__" : (undefined) 00:02:00.695 Fetching value of define "__znver4__" : (undefined) 00:02:00.695 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.695 Message: lib/log: Defining dependency "log" 00:02:00.695 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.695 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.695 Checking for function "getentropy" : NO 00:02:00.695 Message: lib/eal: Defining dependency "eal" 00:02:00.695 Message: lib/ring: Defining dependency "ring" 00:02:00.695 Message: lib/rcu: Defining dependency "rcu" 00:02:00.695 Message: lib/mempool: Defining dependency "mempool" 00:02:00.695 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.695 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.695 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.695 Compiler for C supports arguments -mpclmul: YES 00:02:00.695 Compiler for C supports arguments -maes: YES 00:02:00.695 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.695 Compiler for C supports arguments -mavx512bw: YES 00:02:00.695 Compiler for C supports arguments -mavx512dq: YES 00:02:00.695 Compiler for C supports arguments -mavx512vl: YES 00:02:00.695 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.695 Compiler for C supports arguments -mavx2: YES 00:02:00.695 Compiler for C supports arguments -mavx: YES 00:02:00.695 Message: lib/net: Defining dependency "net" 00:02:00.695 Message: lib/meter: Defining dependency "meter" 00:02:00.695 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.695 Message: lib/pci: Defining dependency "pci" 00:02:00.695 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.695 Message: lib/hash: Defining dependency "hash" 00:02:00.695 Message: lib/timer: Defining dependency "timer" 00:02:00.695 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.695 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.695 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.695 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.695 Message: lib/power: Defining dependency "power" 00:02:00.695 Message: lib/reorder: Defining dependency "reorder" 00:02:00.695 Message: lib/security: Defining dependency "security" 00:02:00.695 Has header "linux/userfaultfd.h" : YES 00:02:00.695 Has header "linux/vduse.h" : YES 00:02:00.695 Message: lib/vhost: Defining dependency "vhost" 00:02:00.695 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.695 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.695 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.695 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.695 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.695 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.695 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.695 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.695 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.695 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.695 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.695 Configuring doxy-api-html.conf using configuration 00:02:00.695 Configuring doxy-api-man.conf using configuration 00:02:00.695 Program mandb found: YES (/usr/bin/mandb) 00:02:00.695 Program sphinx-build found: NO 00:02:00.695 Configuring rte_build_config.h using configuration 00:02:00.695 Message: 00:02:00.695 ================= 00:02:00.695 Applications Enabled 00:02:00.695 ================= 00:02:00.695 00:02:00.695 apps: 00:02:00.695 00:02:00.695 00:02:00.695 Message: 00:02:00.695 ================= 00:02:00.695 Libraries Enabled 00:02:00.695 ================= 00:02:00.695 00:02:00.695 libs: 00:02:00.695 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.695 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.695 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.695 00:02:00.695 Message: 00:02:00.695 =============== 00:02:00.695 Drivers Enabled 00:02:00.695 =============== 00:02:00.695 00:02:00.695 common: 00:02:00.695 00:02:00.695 bus: 00:02:00.695 pci, vdev, 00:02:00.695 mempool: 00:02:00.695 ring, 00:02:00.695 dma: 00:02:00.695 00:02:00.695 net: 00:02:00.695 00:02:00.695 crypto: 00:02:00.695 00:02:00.695 compress: 00:02:00.695 00:02:00.695 vdpa: 00:02:00.695 00:02:00.695 00:02:00.695 Message: 00:02:00.695 ================= 00:02:00.695 Content Skipped 00:02:00.695 ================= 00:02:00.695 00:02:00.695 apps: 00:02:00.695 dumpcap: explicitly disabled via build config 00:02:00.695 graph: explicitly disabled via build config 00:02:00.695 pdump: explicitly disabled via build config 00:02:00.695 proc-info: explicitly disabled via build config 00:02:00.695 test-acl: explicitly disabled via build config 00:02:00.695 test-bbdev: explicitly disabled via build config 00:02:00.695 test-cmdline: explicitly disabled via build config 00:02:00.695 test-compress-perf: explicitly disabled via build config 00:02:00.696 test-crypto-perf: explicitly disabled via build config 00:02:00.696 test-dma-perf: explicitly disabled via build config 00:02:00.696 test-eventdev: explicitly disabled via build config 00:02:00.696 test-fib: explicitly disabled via build config 00:02:00.696 test-flow-perf: explicitly disabled via build config 00:02:00.696 test-gpudev: explicitly disabled via build config 00:02:00.696 test-mldev: explicitly disabled via build config 00:02:00.696 test-pipeline: explicitly disabled via build config 00:02:00.696 test-pmd: explicitly disabled via build config 00:02:00.696 test-regex: explicitly disabled via build config 00:02:00.696 test-sad: explicitly disabled via build config 00:02:00.696 test-security-perf: explicitly disabled via build config 00:02:00.696 00:02:00.696 libs: 00:02:00.696 argparse: explicitly disabled via build config 00:02:00.696 metrics: explicitly disabled via build config 00:02:00.696 acl: explicitly disabled via build config 00:02:00.696 bbdev: explicitly disabled via build config 00:02:00.696 
bitratestats: explicitly disabled via build config 00:02:00.696 bpf: explicitly disabled via build config 00:02:00.696 cfgfile: explicitly disabled via build config 00:02:00.696 distributor: explicitly disabled via build config 00:02:00.696 efd: explicitly disabled via build config 00:02:00.696 eventdev: explicitly disabled via build config 00:02:00.696 dispatcher: explicitly disabled via build config 00:02:00.696 gpudev: explicitly disabled via build config 00:02:00.696 gro: explicitly disabled via build config 00:02:00.696 gso: explicitly disabled via build config 00:02:00.696 ip_frag: explicitly disabled via build config 00:02:00.696 jobstats: explicitly disabled via build config 00:02:00.696 latencystats: explicitly disabled via build config 00:02:00.696 lpm: explicitly disabled via build config 00:02:00.696 member: explicitly disabled via build config 00:02:00.696 pcapng: explicitly disabled via build config 00:02:00.696 rawdev: explicitly disabled via build config 00:02:00.696 regexdev: explicitly disabled via build config 00:02:00.696 mldev: explicitly disabled via build config 00:02:00.696 rib: explicitly disabled via build config 00:02:00.696 sched: explicitly disabled via build config 00:02:00.696 stack: explicitly disabled via build config 00:02:00.696 ipsec: explicitly disabled via build config 00:02:00.696 pdcp: explicitly disabled via build config 00:02:00.696 fib: explicitly disabled via build config 00:02:00.696 port: explicitly disabled via build config 00:02:00.696 pdump: explicitly disabled via build config 00:02:00.696 table: explicitly disabled via build config 00:02:00.696 pipeline: explicitly disabled via build config 00:02:00.696 graph: explicitly disabled via build config 00:02:00.696 node: explicitly disabled via build config 00:02:00.696 00:02:00.696 drivers: 00:02:00.696 common/cpt: not in enabled drivers build config 00:02:00.696 common/dpaax: not in enabled drivers build config 00:02:00.696 common/iavf: not in enabled drivers build config 00:02:00.696 common/idpf: not in enabled drivers build config 00:02:00.696 common/ionic: not in enabled drivers build config 00:02:00.696 common/mvep: not in enabled drivers build config 00:02:00.696 common/octeontx: not in enabled drivers build config 00:02:00.696 bus/auxiliary: not in enabled drivers build config 00:02:00.696 bus/cdx: not in enabled drivers build config 00:02:00.696 bus/dpaa: not in enabled drivers build config 00:02:00.696 bus/fslmc: not in enabled drivers build config 00:02:00.696 bus/ifpga: not in enabled drivers build config 00:02:00.696 bus/platform: not in enabled drivers build config 00:02:00.696 bus/uacce: not in enabled drivers build config 00:02:00.696 bus/vmbus: not in enabled drivers build config 00:02:00.696 common/cnxk: not in enabled drivers build config 00:02:00.696 common/mlx5: not in enabled drivers build config 00:02:00.696 common/nfp: not in enabled drivers build config 00:02:00.696 common/nitrox: not in enabled drivers build config 00:02:00.696 common/qat: not in enabled drivers build config 00:02:00.696 common/sfc_efx: not in enabled drivers build config 00:02:00.696 mempool/bucket: not in enabled drivers build config 00:02:00.696 mempool/cnxk: not in enabled drivers build config 00:02:00.696 mempool/dpaa: not in enabled drivers build config 00:02:00.696 mempool/dpaa2: not in enabled drivers build config 00:02:00.696 mempool/octeontx: not in enabled drivers build config 00:02:00.696 mempool/stack: not in enabled drivers build config 00:02:00.696 dma/cnxk: not in enabled drivers build 
config 00:02:00.696 dma/dpaa: not in enabled drivers build config 00:02:00.696 dma/dpaa2: not in enabled drivers build config 00:02:00.696 dma/hisilicon: not in enabled drivers build config 00:02:00.696 dma/idxd: not in enabled drivers build config 00:02:00.696 dma/ioat: not in enabled drivers build config 00:02:00.696 dma/skeleton: not in enabled drivers build config 00:02:00.696 net/af_packet: not in enabled drivers build config 00:02:00.696 net/af_xdp: not in enabled drivers build config 00:02:00.696 net/ark: not in enabled drivers build config 00:02:00.696 net/atlantic: not in enabled drivers build config 00:02:00.696 net/avp: not in enabled drivers build config 00:02:00.696 net/axgbe: not in enabled drivers build config 00:02:00.696 net/bnx2x: not in enabled drivers build config 00:02:00.696 net/bnxt: not in enabled drivers build config 00:02:00.696 net/bonding: not in enabled drivers build config 00:02:00.696 net/cnxk: not in enabled drivers build config 00:02:00.696 net/cpfl: not in enabled drivers build config 00:02:00.696 net/cxgbe: not in enabled drivers build config 00:02:00.696 net/dpaa: not in enabled drivers build config 00:02:00.696 net/dpaa2: not in enabled drivers build config 00:02:00.696 net/e1000: not in enabled drivers build config 00:02:00.696 net/ena: not in enabled drivers build config 00:02:00.696 net/enetc: not in enabled drivers build config 00:02:00.696 net/enetfec: not in enabled drivers build config 00:02:00.696 net/enic: not in enabled drivers build config 00:02:00.696 net/failsafe: not in enabled drivers build config 00:02:00.696 net/fm10k: not in enabled drivers build config 00:02:00.696 net/gve: not in enabled drivers build config 00:02:00.696 net/hinic: not in enabled drivers build config 00:02:00.696 net/hns3: not in enabled drivers build config 00:02:00.696 net/i40e: not in enabled drivers build config 00:02:00.696 net/iavf: not in enabled drivers build config 00:02:00.696 net/ice: not in enabled drivers build config 00:02:00.696 net/idpf: not in enabled drivers build config 00:02:00.696 net/igc: not in enabled drivers build config 00:02:00.696 net/ionic: not in enabled drivers build config 00:02:00.696 net/ipn3ke: not in enabled drivers build config 00:02:00.696 net/ixgbe: not in enabled drivers build config 00:02:00.696 net/mana: not in enabled drivers build config 00:02:00.696 net/memif: not in enabled drivers build config 00:02:00.696 net/mlx4: not in enabled drivers build config 00:02:00.696 net/mlx5: not in enabled drivers build config 00:02:00.696 net/mvneta: not in enabled drivers build config 00:02:00.696 net/mvpp2: not in enabled drivers build config 00:02:00.696 net/netvsc: not in enabled drivers build config 00:02:00.696 net/nfb: not in enabled drivers build config 00:02:00.696 net/nfp: not in enabled drivers build config 00:02:00.696 net/ngbe: not in enabled drivers build config 00:02:00.696 net/null: not in enabled drivers build config 00:02:00.696 net/octeontx: not in enabled drivers build config 00:02:00.696 net/octeon_ep: not in enabled drivers build config 00:02:00.696 net/pcap: not in enabled drivers build config 00:02:00.696 net/pfe: not in enabled drivers build config 00:02:00.696 net/qede: not in enabled drivers build config 00:02:00.696 net/ring: not in enabled drivers build config 00:02:00.696 net/sfc: not in enabled drivers build config 00:02:00.696 net/softnic: not in enabled drivers build config 00:02:00.696 net/tap: not in enabled drivers build config 00:02:00.696 net/thunderx: not in enabled drivers build config 00:02:00.696 
net/txgbe: not in enabled drivers build config 00:02:00.696 net/vdev_netvsc: not in enabled drivers build config 00:02:00.696 net/vhost: not in enabled drivers build config 00:02:00.696 net/virtio: not in enabled drivers build config 00:02:00.696 net/vmxnet3: not in enabled drivers build config 00:02:00.696 raw/*: missing internal dependency, "rawdev" 00:02:00.696 crypto/armv8: not in enabled drivers build config 00:02:00.696 crypto/bcmfs: not in enabled drivers build config 00:02:00.696 crypto/caam_jr: not in enabled drivers build config 00:02:00.696 crypto/ccp: not in enabled drivers build config 00:02:00.696 crypto/cnxk: not in enabled drivers build config 00:02:00.696 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.696 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.696 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.696 crypto/mlx5: not in enabled drivers build config 00:02:00.696 crypto/mvsam: not in enabled drivers build config 00:02:00.696 crypto/nitrox: not in enabled drivers build config 00:02:00.696 crypto/null: not in enabled drivers build config 00:02:00.696 crypto/octeontx: not in enabled drivers build config 00:02:00.696 crypto/openssl: not in enabled drivers build config 00:02:00.696 crypto/scheduler: not in enabled drivers build config 00:02:00.696 crypto/uadk: not in enabled drivers build config 00:02:00.696 crypto/virtio: not in enabled drivers build config 00:02:00.696 compress/isal: not in enabled drivers build config 00:02:00.696 compress/mlx5: not in enabled drivers build config 00:02:00.696 compress/nitrox: not in enabled drivers build config 00:02:00.696 compress/octeontx: not in enabled drivers build config 00:02:00.696 compress/zlib: not in enabled drivers build config 00:02:00.696 regex/*: missing internal dependency, "regexdev" 00:02:00.696 ml/*: missing internal dependency, "mldev" 00:02:00.696 vdpa/ifc: not in enabled drivers build config 00:02:00.696 vdpa/mlx5: not in enabled drivers build config 00:02:00.696 vdpa/nfp: not in enabled drivers build config 00:02:00.696 vdpa/sfc: not in enabled drivers build config 00:02:00.696 event/*: missing internal dependency, "eventdev" 00:02:00.696 baseband/*: missing internal dependency, "bbdev" 00:02:00.696 gpu/*: missing internal dependency, "gpudev" 00:02:00.696 00:02:00.696 00:02:00.696 Build targets in project: 85 00:02:00.696 00:02:00.696 DPDK 24.03.0 00:02:00.696 00:02:00.696 User defined options 00:02:00.696 buildtype : debug 00:02:00.696 default_library : shared 00:02:00.696 libdir : lib 00:02:00.696 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:00.696 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.696 c_link_args : 00:02:00.696 cpu_instruction_set: native 00:02:00.696 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:00.697 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:00.697 enable_docs : false 00:02:00.697 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.697 enable_kmods : false 00:02:00.697 max_lcores : 128 00:02:00.697 tests : false 00:02:00.697 00:02:00.697 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.955 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.955 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.955 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.955 [3/268] Linking static target lib/librte_kvargs.a 00:02:00.955 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.955 [5/268] Linking static target lib/librte_log.a 00:02:00.955 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.522 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.522 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.522 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.522 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.522 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.780 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.780 [13/268] Linking static target lib/librte_telemetry.a 00:02:01.780 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.780 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.780 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.780 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.039 [18/268] Linking target lib/librte_log.so.24.1 00:02:02.039 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.039 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.297 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.297 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:02.556 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.556 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.556 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.556 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.556 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.556 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.815 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:02.815 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.815 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.815 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.074 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.074 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:03.074 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.333 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.333 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.333 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.591 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.591 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.591 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.591 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.591 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.591 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.854 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.854 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.854 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.120 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.120 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.378 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.378 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.637 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.637 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.637 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.897 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.897 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.897 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.897 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.156 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.156 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.156 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.156 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.414 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.673 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.673 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.673 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.673 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.932 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.932 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.932 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.190 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.190 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.190 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.190 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.190 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.190 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.448 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.706 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.706 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.706 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.706 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.964 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.964 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.964 [84/268] Linking static target lib/librte_ring.a 00:02:07.223 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.223 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.223 [87/268] Linking static target lib/librte_eal.a 00:02:07.482 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.482 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.482 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.482 [91/268] Linking static target lib/librte_rcu.a 00:02:07.741 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.741 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.741 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.741 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.741 [96/268] Linking static target lib/librte_mempool.a 00:02:07.998 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.256 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.256 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.256 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.256 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.514 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.514 [103/268] Linking static target lib/librte_mbuf.a 00:02:08.514 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.773 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.773 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.773 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:09.031 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.031 [109/268] Linking static target lib/librte_net.a 00:02:09.031 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.031 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.031 [112/268] Linking static target lib/librte_meter.a 00:02:09.031 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.289 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.289 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.546 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.546 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.546 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.546 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.803 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.803 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.061 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:10.320 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:10.320 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.320 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.320 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.320 [127/268] Linking static target lib/librte_pci.a 00:02:10.320 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.579 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.579 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.837 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.837 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.837 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.837 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.837 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.837 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.837 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.837 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.837 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.837 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.837 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.103 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.103 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.103 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.103 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.103 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.103 [147/268] Linking static target lib/librte_ethdev.a 00:02:11.382 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.382 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.641 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.641 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.641 [152/268] Linking static target lib/librte_cmdline.a 00:02:11.641 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.898 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.898 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.898 [156/268] Linking static target lib/librte_hash.a 00:02:12.156 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.156 [158/268] Linking static target lib/librte_timer.a 00:02:12.156 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.156 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.156 
[161/268] Linking static target lib/librte_compressdev.a 00:02:12.156 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.413 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.672 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:12.672 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.672 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.672 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.930 [168/268] Linking static target lib/librte_dmadev.a 00:02:12.930 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.188 [170/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.188 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.188 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.188 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.188 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.446 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.446 [176/268] Linking static target lib/librte_cryptodev.a 00:02:13.446 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.446 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.756 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.756 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.756 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.014 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.014 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.014 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.271 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.271 [186/268] Linking static target lib/librte_reorder.a 00:02:14.528 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.528 [188/268] Linking static target lib/librte_security.a 00:02:14.528 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.528 [190/268] Linking static target lib/librte_power.a 00:02:14.528 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.786 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.786 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.045 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.045 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.303 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.563 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.563 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.563 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.821 [200/268] Generating lib/power.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:15.821 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.079 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.079 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.079 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.337 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.337 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.337 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.337 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.337 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.337 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.595 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.595 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.595 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.595 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.595 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.595 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:16.595 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.596 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.596 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.596 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:16.854 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.854 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.854 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.854 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.854 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.854 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.854 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.112 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.046 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.046 [230/268] Linking static target lib/librte_vhost.a 00:02:18.612 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.612 [232/268] Linking target lib/librte_eal.so.24.1 00:02:18.870 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.870 [234/268] Linking target lib/librte_meter.so.24.1 00:02:18.870 [235/268] Linking target lib/librte_ring.so.24.1 00:02:18.870 [236/268] Linking target lib/librte_pci.so.24.1 00:02:18.870 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.870 [238/268] Linking target lib/librte_timer.so.24.1 00:02:18.870 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
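Consistent with the enable_drivers selection from the configure summary, the only driver PMDs going through the Generating/Compiling/Linking steps here are the PCI bus, vdev bus and ring mempool drivers, each produced both as a static archive and as a versioned shared object. A hedged way to double-check that from the build tree is sketched below; the directory comes from the ninja "Entering directory" line earlier in the log, and the exact layout under it is an assumption.

  # list the driver shared objects this DPDK build produced (paths assumed from the log)
  find /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/drivers -name 'librte_*.so.24.1'
  # expected: librte_bus_pci.so.24.1, librte_bus_vdev.so.24.1, librte_mempool_ring.so.24.1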
00:02:19.128 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.128 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.128 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.128 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.128 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.128 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:19.128 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:19.128 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:19.128 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.386 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:19.386 [250/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.386 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.386 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:19.386 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:19.386 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:19.386 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:19.386 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:19.644 [257/268] Linking target lib/librte_net.so.24.1 00:02:19.644 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:19.644 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.644 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.644 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:19.644 [262/268] Linking target lib/librte_hash.so.24.1 00:02:19.644 [263/268] Linking target lib/librte_security.so.24.1 00:02:19.903 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:19.903 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.903 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:19.903 [267/268] Linking target lib/librte_power.so.24.1 00:02:19.903 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:19.903 INFO: autodetecting backend as ninja 00:02:19.903 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:21.292 CC lib/ut/ut.o 00:02:21.292 CC lib/ut_mock/mock.o 00:02:21.292 CC lib/log/log.o 00:02:21.292 CC lib/log/log_flags.o 00:02:21.292 CC lib/log/log_deprecated.o 00:02:21.292 LIB libspdk_ut.a 00:02:21.292 SO libspdk_ut.so.2.0 00:02:21.292 LIB libspdk_ut_mock.a 00:02:21.292 LIB libspdk_log.a 00:02:21.292 SO libspdk_ut_mock.so.6.0 00:02:21.292 SO libspdk_log.so.7.0 00:02:21.551 SYMLINK libspdk_ut.so 00:02:21.551 SYMLINK libspdk_ut_mock.so 00:02:21.551 SYMLINK libspdk_log.so 00:02:21.551 CC lib/util/base64.o 00:02:21.551 CC lib/util/bit_array.o 00:02:21.551 CC lib/util/cpuset.o 00:02:21.551 CC lib/util/crc16.o 00:02:21.551 CC lib/util/crc32.o 00:02:21.551 CC lib/util/crc32c.o 00:02:21.551 CC lib/dma/dma.o 00:02:21.551 CC lib/ioat/ioat.o 00:02:21.551 CXX lib/trace_parser/trace.o 00:02:21.809 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.809 CC lib/vfio_user/host/vfio_user.o 00:02:21.809 CC lib/util/crc32_ieee.o 00:02:21.809 CC 
lib/util/crc64.o 00:02:21.809 CC lib/util/dif.o 00:02:21.809 CC lib/util/fd.o 00:02:22.067 CC lib/util/fd_group.o 00:02:22.067 LIB libspdk_dma.a 00:02:22.067 CC lib/util/file.o 00:02:22.067 SO libspdk_dma.so.4.0 00:02:22.067 CC lib/util/hexlify.o 00:02:22.067 CC lib/util/iov.o 00:02:22.067 SYMLINK libspdk_dma.so 00:02:22.067 CC lib/util/math.o 00:02:22.067 CC lib/util/net.o 00:02:22.067 LIB libspdk_ioat.a 00:02:22.067 LIB libspdk_vfio_user.a 00:02:22.067 CC lib/util/pipe.o 00:02:22.067 SO libspdk_ioat.so.7.0 00:02:22.067 SO libspdk_vfio_user.so.5.0 00:02:22.325 CC lib/util/strerror_tls.o 00:02:22.325 SYMLINK libspdk_ioat.so 00:02:22.325 CC lib/util/string.o 00:02:22.325 SYMLINK libspdk_vfio_user.so 00:02:22.325 CC lib/util/uuid.o 00:02:22.325 CC lib/util/xor.o 00:02:22.325 CC lib/util/zipf.o 00:02:22.584 LIB libspdk_util.a 00:02:22.584 SO libspdk_util.so.9.1 00:02:22.584 LIB libspdk_trace_parser.a 00:02:22.843 SO libspdk_trace_parser.so.5.0 00:02:22.843 SYMLINK libspdk_util.so 00:02:22.843 SYMLINK libspdk_trace_parser.so 00:02:23.106 CC lib/idxd/idxd.o 00:02:23.106 CC lib/rdma_provider/common.o 00:02:23.106 CC lib/idxd/idxd_kernel.o 00:02:23.106 CC lib/idxd/idxd_user.o 00:02:23.106 CC lib/vmd/vmd.o 00:02:23.106 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.106 CC lib/conf/conf.o 00:02:23.106 CC lib/env_dpdk/env.o 00:02:23.106 CC lib/json/json_parse.o 00:02:23.106 CC lib/rdma_utils/rdma_utils.o 00:02:23.106 CC lib/vmd/led.o 00:02:23.106 CC lib/env_dpdk/memory.o 00:02:23.106 LIB libspdk_conf.a 00:02:23.382 SO libspdk_conf.so.6.0 00:02:23.382 LIB libspdk_rdma_provider.a 00:02:23.382 SYMLINK libspdk_conf.so 00:02:23.382 CC lib/env_dpdk/pci.o 00:02:23.382 CC lib/env_dpdk/init.o 00:02:23.382 SO libspdk_rdma_provider.so.6.0 00:02:23.382 CC lib/json/json_util.o 00:02:23.382 CC lib/json/json_write.o 00:02:23.382 LIB libspdk_rdma_utils.a 00:02:23.382 SYMLINK libspdk_rdma_provider.so 00:02:23.641 CC lib/env_dpdk/threads.o 00:02:23.641 SO libspdk_rdma_utils.so.1.0 00:02:23.641 SYMLINK libspdk_rdma_utils.so 00:02:23.641 CC lib/env_dpdk/pci_ioat.o 00:02:23.641 CC lib/env_dpdk/pci_virtio.o 00:02:23.641 LIB libspdk_idxd.a 00:02:23.641 LIB libspdk_vmd.a 00:02:23.641 SO libspdk_vmd.so.6.0 00:02:23.641 SO libspdk_idxd.so.12.0 00:02:23.900 CC lib/env_dpdk/pci_vmd.o 00:02:23.900 CC lib/env_dpdk/pci_idxd.o 00:02:23.900 CC lib/env_dpdk/pci_event.o 00:02:23.900 SYMLINK libspdk_vmd.so 00:02:23.900 CC lib/env_dpdk/sigbus_handler.o 00:02:23.900 SYMLINK libspdk_idxd.so 00:02:23.900 CC lib/env_dpdk/pci_dpdk.o 00:02:23.900 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.900 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.900 LIB libspdk_json.a 00:02:23.900 SO libspdk_json.so.6.0 00:02:23.900 SYMLINK libspdk_json.so 00:02:24.159 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.159 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.159 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.159 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.418 LIB libspdk_env_dpdk.a 00:02:24.418 LIB libspdk_jsonrpc.a 00:02:24.418 SO libspdk_env_dpdk.so.15.0 00:02:24.418 SO libspdk_jsonrpc.so.6.0 00:02:24.676 SYMLINK libspdk_jsonrpc.so 00:02:24.676 SYMLINK libspdk_env_dpdk.so 00:02:24.934 CC lib/rpc/rpc.o 00:02:25.193 LIB libspdk_rpc.a 00:02:25.193 SO libspdk_rpc.so.6.0 00:02:25.193 SYMLINK libspdk_rpc.so 00:02:25.452 CC lib/keyring/keyring.o 00:02:25.452 CC lib/keyring/keyring_rpc.o 00:02:25.452 CC lib/notify/notify.o 00:02:25.452 CC lib/trace/trace.o 00:02:25.452 CC lib/trace/trace_flags.o 00:02:25.452 CC lib/notify/notify_rpc.o 00:02:25.452 CC lib/trace/trace_rpc.o 
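Each SPDK library in the CC/LIB/SO/SYMLINK output above is emitted three ways: a static archive (LIB), a versioned shared object (SO, e.g. libspdk_util.so.9.1) and an unversioned symlink to it (SYMLINK). A quick way to confirm that on the build host might look like the following sketch; the build/lib location is an assumption about where this tree places its artifacts, not something stated in this excerpt.

  # inspect one library produced by the LIB/SO/SYMLINK steps above (paths assumed)
  ls -l build/lib/libspdk_util.a build/lib/libspdk_util.so*
  # print the dynamic section; a SONAME entry, if present, gives the runtime name
  readelf -d build/lib/libspdk_util.so.9.1 | grep SONAME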
00:02:25.711 LIB libspdk_keyring.a 00:02:25.711 LIB libspdk_notify.a 00:02:25.711 SO libspdk_keyring.so.1.0 00:02:25.711 SO libspdk_notify.so.6.0 00:02:25.711 LIB libspdk_trace.a 00:02:25.711 SYMLINK libspdk_keyring.so 00:02:25.711 SO libspdk_trace.so.10.0 00:02:25.711 SYMLINK libspdk_notify.so 00:02:25.711 SYMLINK libspdk_trace.so 00:02:25.968 CC lib/thread/thread.o 00:02:25.968 CC lib/thread/iobuf.o 00:02:25.968 CC lib/sock/sock.o 00:02:25.968 CC lib/sock/sock_rpc.o 00:02:26.557 LIB libspdk_sock.a 00:02:26.557 SO libspdk_sock.so.10.0 00:02:26.557 SYMLINK libspdk_sock.so 00:02:26.853 CC lib/nvme/nvme_ctrlr.o 00:02:26.854 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.854 CC lib/nvme/nvme_ns_cmd.o 00:02:26.854 CC lib/nvme/nvme_fabric.o 00:02:26.854 CC lib/nvme/nvme_pcie_common.o 00:02:26.854 CC lib/nvme/nvme_ns.o 00:02:26.854 CC lib/nvme/nvme_pcie.o 00:02:26.854 CC lib/nvme/nvme_qpair.o 00:02:26.854 CC lib/nvme/nvme.o 00:02:27.811 CC lib/nvme/nvme_quirks.o 00:02:27.811 CC lib/nvme/nvme_transport.o 00:02:27.811 CC lib/nvme/nvme_discovery.o 00:02:28.070 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:28.070 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:28.070 LIB libspdk_thread.a 00:02:28.328 CC lib/nvme/nvme_tcp.o 00:02:28.328 SO libspdk_thread.so.10.1 00:02:28.328 CC lib/nvme/nvme_opal.o 00:02:28.328 CC lib/nvme/nvme_io_msg.o 00:02:28.328 SYMLINK libspdk_thread.so 00:02:28.328 CC lib/nvme/nvme_poll_group.o 00:02:28.587 CC lib/nvme/nvme_zns.o 00:02:28.845 CC lib/accel/accel.o 00:02:28.845 CC lib/nvme/nvme_stubs.o 00:02:29.107 CC lib/accel/accel_rpc.o 00:02:29.107 CC lib/accel/accel_sw.o 00:02:29.107 CC lib/nvme/nvme_auth.o 00:02:29.107 CC lib/nvme/nvme_cuse.o 00:02:29.365 CC lib/nvme/nvme_rdma.o 00:02:29.623 CC lib/blob/blobstore.o 00:02:29.623 CC lib/blob/request.o 00:02:29.623 CC lib/init/json_config.o 00:02:29.880 CC lib/virtio/virtio.o 00:02:29.880 LIB libspdk_accel.a 00:02:30.138 CC lib/virtio/virtio_vhost_user.o 00:02:30.138 SO libspdk_accel.so.15.1 00:02:30.138 CC lib/init/subsystem.o 00:02:30.138 SYMLINK libspdk_accel.so 00:02:30.138 CC lib/init/subsystem_rpc.o 00:02:30.138 CC lib/init/rpc.o 00:02:30.396 CC lib/blob/zeroes.o 00:02:30.396 CC lib/virtio/virtio_vfio_user.o 00:02:30.396 CC lib/virtio/virtio_pci.o 00:02:30.396 CC lib/blob/blob_bs_dev.o 00:02:30.396 LIB libspdk_init.a 00:02:30.396 CC lib/bdev/bdev.o 00:02:30.396 SO libspdk_init.so.5.0 00:02:30.655 CC lib/bdev/bdev_rpc.o 00:02:30.655 CC lib/bdev/bdev_zone.o 00:02:30.655 CC lib/bdev/part.o 00:02:30.655 SYMLINK libspdk_init.so 00:02:30.655 CC lib/bdev/scsi_nvme.o 00:02:30.915 LIB libspdk_virtio.a 00:02:30.915 SO libspdk_virtio.so.7.0 00:02:30.915 CC lib/event/app.o 00:02:30.915 CC lib/event/reactor.o 00:02:30.915 CC lib/event/log_rpc.o 00:02:30.915 CC lib/event/app_rpc.o 00:02:30.915 SYMLINK libspdk_virtio.so 00:02:30.915 CC lib/event/scheduler_static.o 00:02:31.511 LIB libspdk_nvme.a 00:02:31.511 LIB libspdk_event.a 00:02:31.511 SO libspdk_event.so.14.0 00:02:31.769 SYMLINK libspdk_event.so 00:02:31.769 SO libspdk_nvme.so.13.1 00:02:32.028 SYMLINK libspdk_nvme.so 00:02:32.959 LIB libspdk_blob.a 00:02:32.959 SO libspdk_blob.so.11.0 00:02:32.959 SYMLINK libspdk_blob.so 00:02:33.216 CC lib/lvol/lvol.o 00:02:33.216 CC lib/blobfs/blobfs.o 00:02:33.216 CC lib/blobfs/tree.o 00:02:33.779 LIB libspdk_bdev.a 00:02:34.036 SO libspdk_bdev.so.15.1 00:02:34.036 SYMLINK libspdk_bdev.so 00:02:34.292 CC lib/nbd/nbd_rpc.o 00:02:34.292 CC lib/nbd/nbd.o 00:02:34.292 CC lib/nvmf/ctrlr.o 00:02:34.292 CC lib/nvmf/ctrlr_discovery.o 00:02:34.292 CC lib/ublk/ublk.o 
00:02:34.292 CC lib/nvmf/ctrlr_bdev.o 00:02:34.292 CC lib/scsi/dev.o 00:02:34.292 CC lib/ftl/ftl_core.o 00:02:34.549 LIB libspdk_blobfs.a 00:02:34.549 SO libspdk_blobfs.so.10.0 00:02:34.549 LIB libspdk_lvol.a 00:02:34.549 SO libspdk_lvol.so.10.0 00:02:34.549 SYMLINK libspdk_blobfs.so 00:02:34.549 CC lib/ftl/ftl_init.o 00:02:34.549 CC lib/scsi/lun.o 00:02:34.549 SYMLINK libspdk_lvol.so 00:02:34.549 CC lib/scsi/port.o 00:02:34.806 CC lib/scsi/scsi.o 00:02:34.806 CC lib/scsi/scsi_bdev.o 00:02:35.063 LIB libspdk_nbd.a 00:02:35.063 CC lib/ftl/ftl_layout.o 00:02:35.063 SO libspdk_nbd.so.7.0 00:02:35.063 CC lib/ftl/ftl_debug.o 00:02:35.063 CC lib/ublk/ublk_rpc.o 00:02:35.063 SYMLINK libspdk_nbd.so 00:02:35.063 CC lib/scsi/scsi_pr.o 00:02:35.319 CC lib/scsi/scsi_rpc.o 00:02:35.319 CC lib/scsi/task.o 00:02:35.319 CC lib/nvmf/subsystem.o 00:02:35.319 CC lib/ftl/ftl_io.o 00:02:35.319 LIB libspdk_ublk.a 00:02:35.319 SO libspdk_ublk.so.3.0 00:02:35.575 CC lib/ftl/ftl_sb.o 00:02:35.575 CC lib/ftl/ftl_l2p.o 00:02:35.575 SYMLINK libspdk_ublk.so 00:02:35.575 CC lib/nvmf/nvmf.o 00:02:35.575 CC lib/nvmf/nvmf_rpc.o 00:02:35.575 CC lib/nvmf/transport.o 00:02:35.575 CC lib/nvmf/tcp.o 00:02:35.575 CC lib/nvmf/stubs.o 00:02:35.575 LIB libspdk_scsi.a 00:02:35.833 CC lib/nvmf/mdns_server.o 00:02:35.833 SO libspdk_scsi.so.9.0 00:02:35.833 CC lib/ftl/ftl_l2p_flat.o 00:02:35.833 SYMLINK libspdk_scsi.so 00:02:35.833 CC lib/ftl/ftl_nv_cache.o 00:02:36.398 CC lib/nvmf/rdma.o 00:02:36.398 CC lib/nvmf/auth.o 00:02:36.657 CC lib/ftl/ftl_band.o 00:02:36.657 CC lib/iscsi/conn.o 00:02:36.657 CC lib/iscsi/init_grp.o 00:02:36.657 CC lib/vhost/vhost.o 00:02:36.915 CC lib/iscsi/iscsi.o 00:02:37.173 CC lib/iscsi/md5.o 00:02:37.173 CC lib/iscsi/param.o 00:02:37.431 CC lib/ftl/ftl_band_ops.o 00:02:37.431 CC lib/iscsi/portal_grp.o 00:02:37.431 CC lib/iscsi/tgt_node.o 00:02:37.431 CC lib/vhost/vhost_rpc.o 00:02:37.690 CC lib/vhost/vhost_scsi.o 00:02:37.690 CC lib/ftl/ftl_writer.o 00:02:37.690 CC lib/iscsi/iscsi_subsystem.o 00:02:37.690 CC lib/vhost/vhost_blk.o 00:02:37.948 CC lib/vhost/rte_vhost_user.o 00:02:37.948 CC lib/ftl/ftl_rq.o 00:02:38.207 CC lib/ftl/ftl_reloc.o 00:02:38.207 CC lib/iscsi/iscsi_rpc.o 00:02:38.207 CC lib/iscsi/task.o 00:02:38.465 CC lib/ftl/ftl_l2p_cache.o 00:02:38.465 CC lib/ftl/ftl_p2l.o 00:02:38.465 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.465 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.465 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.465 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.724 LIB libspdk_iscsi.a 00:02:38.724 SO libspdk_iscsi.so.8.0 00:02:38.724 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.724 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.724 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.724 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.983 SYMLINK libspdk_iscsi.so 00:02:38.983 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.983 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.983 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:38.983 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.983 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:39.241 CC lib/ftl/utils/ftl_conf.o 00:02:39.241 CC lib/ftl/utils/ftl_md.o 00:02:39.241 CC lib/ftl/utils/ftl_mempool.o 00:02:39.241 CC lib/ftl/utils/ftl_bitmap.o 00:02:39.241 CC lib/ftl/utils/ftl_property.o 00:02:39.241 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:39.497 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:39.497 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:39.497 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:39.497 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:39.498 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:39.498 LIB 
libspdk_vhost.a 00:02:39.755 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:39.755 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:39.755 SO libspdk_vhost.so.8.0 00:02:39.755 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:39.755 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:39.755 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:39.755 CC lib/ftl/base/ftl_base_dev.o 00:02:39.755 CC lib/ftl/base/ftl_base_bdev.o 00:02:39.755 CC lib/ftl/ftl_trace.o 00:02:39.755 SYMLINK libspdk_vhost.so 00:02:39.755 LIB libspdk_nvmf.a 00:02:40.012 SO libspdk_nvmf.so.19.0 00:02:40.271 SYMLINK libspdk_nvmf.so 00:02:40.271 LIB libspdk_ftl.a 00:02:40.529 SO libspdk_ftl.so.9.0 00:02:40.788 SYMLINK libspdk_ftl.so 00:02:41.046 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.303 CC module/blob/bdev/blob_bdev.o 00:02:41.303 CC module/keyring/linux/keyring.o 00:02:41.303 CC module/accel/ioat/accel_ioat.o 00:02:41.303 CC module/accel/dsa/accel_dsa.o 00:02:41.303 CC module/accel/error/accel_error.o 00:02:41.303 CC module/sock/uring/uring.o 00:02:41.303 CC module/keyring/file/keyring.o 00:02:41.303 CC module/sock/posix/posix.o 00:02:41.303 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.303 LIB libspdk_env_dpdk_rpc.a 00:02:41.561 SO libspdk_env_dpdk_rpc.so.6.0 00:02:41.561 CC module/accel/error/accel_error_rpc.o 00:02:41.561 CC module/keyring/linux/keyring_rpc.o 00:02:41.561 SYMLINK libspdk_env_dpdk_rpc.so 00:02:41.561 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.561 CC module/keyring/file/keyring_rpc.o 00:02:41.561 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.561 LIB libspdk_blob_bdev.a 00:02:41.561 LIB libspdk_scheduler_dynamic.a 00:02:41.561 SO libspdk_scheduler_dynamic.so.4.0 00:02:41.561 SO libspdk_blob_bdev.so.11.0 00:02:41.819 LIB libspdk_accel_error.a 00:02:41.819 LIB libspdk_keyring_linux.a 00:02:41.819 SYMLINK libspdk_scheduler_dynamic.so 00:02:41.819 SO libspdk_accel_error.so.2.0 00:02:41.819 LIB libspdk_keyring_file.a 00:02:41.819 SO libspdk_keyring_linux.so.1.0 00:02:41.819 SYMLINK libspdk_blob_bdev.so 00:02:41.819 LIB libspdk_accel_dsa.a 00:02:41.819 LIB libspdk_accel_ioat.a 00:02:41.819 CC module/accel/iaa/accel_iaa.o 00:02:41.819 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.819 SO libspdk_keyring_file.so.1.0 00:02:41.819 SO libspdk_accel_dsa.so.5.0 00:02:41.819 SYMLINK libspdk_keyring_linux.so 00:02:41.819 SO libspdk_accel_ioat.so.6.0 00:02:41.819 SYMLINK libspdk_accel_error.so 00:02:41.819 SYMLINK libspdk_accel_dsa.so 00:02:41.819 SYMLINK libspdk_keyring_file.so 00:02:41.819 SYMLINK libspdk_accel_ioat.so 00:02:42.077 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:42.077 CC module/scheduler/gscheduler/gscheduler.o 00:02:42.077 LIB libspdk_accel_iaa.a 00:02:42.336 CC module/bdev/gpt/gpt.o 00:02:42.336 CC module/bdev/error/vbdev_error.o 00:02:42.336 SO libspdk_accel_iaa.so.3.0 00:02:42.336 CC module/bdev/delay/vbdev_delay.o 00:02:42.336 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.336 LIB libspdk_scheduler_gscheduler.a 00:02:42.336 LIB libspdk_scheduler_dpdk_governor.a 00:02:42.336 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.336 SO libspdk_scheduler_gscheduler.so.4.0 00:02:42.336 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:42.336 SYMLINK libspdk_accel_iaa.so 00:02:42.336 LIB libspdk_sock_uring.a 00:02:42.336 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.336 SO libspdk_sock_uring.so.5.0 00:02:42.336 SYMLINK libspdk_scheduler_gscheduler.so 00:02:42.336 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.336 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.336 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:42.595 LIB libspdk_sock_posix.a 
00:02:42.595 SYMLINK libspdk_sock_uring.so 00:02:42.595 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.595 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.595 SO libspdk_sock_posix.so.6.0 00:02:42.595 LIB libspdk_blobfs_bdev.a 00:02:42.595 SO libspdk_blobfs_bdev.so.6.0 00:02:42.595 SYMLINK libspdk_sock_posix.so 00:02:42.595 CC module/bdev/malloc/bdev_malloc.o 00:02:42.595 LIB libspdk_bdev_error.a 00:02:42.595 LIB libspdk_bdev_gpt.a 00:02:42.595 SO libspdk_bdev_error.so.6.0 00:02:42.595 SYMLINK libspdk_blobfs_bdev.so 00:02:42.854 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.854 SO libspdk_bdev_gpt.so.6.0 00:02:42.854 SYMLINK libspdk_bdev_error.so 00:02:42.854 SYMLINK libspdk_bdev_gpt.so 00:02:42.854 CC module/bdev/null/bdev_null.o 00:02:42.854 LIB libspdk_bdev_delay.a 00:02:42.854 CC module/bdev/nvme/bdev_nvme.o 00:02:42.854 SO libspdk_bdev_delay.so.6.0 00:02:42.854 CC module/bdev/null/bdev_null_rpc.o 00:02:42.854 LIB libspdk_bdev_lvol.a 00:02:42.854 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.854 CC module/bdev/raid/bdev_raid.o 00:02:43.113 SO libspdk_bdev_lvol.so.6.0 00:02:43.113 CC module/bdev/split/vbdev_split.o 00:02:43.113 SYMLINK libspdk_bdev_delay.so 00:02:43.113 CC module/bdev/split/vbdev_split_rpc.o 00:02:43.113 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:43.113 SYMLINK libspdk_bdev_lvol.so 00:02:43.113 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:43.113 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:43.113 LIB libspdk_bdev_malloc.a 00:02:43.113 LIB libspdk_bdev_null.a 00:02:43.113 SO libspdk_bdev_malloc.so.6.0 00:02:43.113 SO libspdk_bdev_null.so.6.0 00:02:43.372 SYMLINK libspdk_bdev_malloc.so 00:02:43.372 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:43.372 CC module/bdev/raid/bdev_raid_rpc.o 00:02:43.372 SYMLINK libspdk_bdev_null.so 00:02:43.372 LIB libspdk_bdev_zone_block.a 00:02:43.372 LIB libspdk_bdev_split.a 00:02:43.372 SO libspdk_bdev_split.so.6.0 00:02:43.372 CC module/bdev/raid/bdev_raid_sb.o 00:02:43.630 SO libspdk_bdev_zone_block.so.6.0 00:02:43.630 LIB libspdk_bdev_passthru.a 00:02:43.630 SYMLINK libspdk_bdev_split.so 00:02:43.630 CC module/bdev/nvme/nvme_rpc.o 00:02:43.630 CC module/bdev/aio/bdev_aio.o 00:02:43.630 SYMLINK libspdk_bdev_zone_block.so 00:02:43.630 SO libspdk_bdev_passthru.so.6.0 00:02:43.630 CC module/bdev/uring/bdev_uring.o 00:02:43.889 SYMLINK libspdk_bdev_passthru.so 00:02:43.889 CC module/bdev/uring/bdev_uring_rpc.o 00:02:43.889 CC module/bdev/ftl/bdev_ftl.o 00:02:43.889 CC module/bdev/iscsi/bdev_iscsi.o 00:02:43.889 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:43.889 CC module/bdev/raid/raid0.o 00:02:43.889 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.147 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.147 LIB libspdk_bdev_uring.a 00:02:44.147 SO libspdk_bdev_uring.so.6.0 00:02:44.147 CC module/bdev/raid/raid1.o 00:02:44.147 LIB libspdk_bdev_ftl.a 00:02:44.147 LIB libspdk_bdev_aio.a 00:02:44.147 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.147 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.147 SYMLINK libspdk_bdev_uring.so 00:02:44.147 SO libspdk_bdev_ftl.so.6.0 00:02:44.147 SO libspdk_bdev_aio.so.6.0 00:02:44.147 CC module/bdev/raid/concat.o 00:02:44.147 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.147 CC module/bdev/nvme/vbdev_opal.o 00:02:44.406 SYMLINK libspdk_bdev_ftl.so 00:02:44.406 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.406 SYMLINK libspdk_bdev_aio.so 00:02:44.406 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.406 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.406 LIB libspdk_bdev_iscsi.a 
00:02:44.406 SO libspdk_bdev_iscsi.so.6.0 00:02:44.664 SYMLINK libspdk_bdev_iscsi.so 00:02:44.664 LIB libspdk_bdev_raid.a 00:02:44.664 SO libspdk_bdev_raid.so.6.0 00:02:44.664 LIB libspdk_bdev_virtio.a 00:02:44.664 SO libspdk_bdev_virtio.so.6.0 00:02:44.664 SYMLINK libspdk_bdev_raid.so 00:02:44.923 SYMLINK libspdk_bdev_virtio.so 00:02:45.881 LIB libspdk_bdev_nvme.a 00:02:45.881 SO libspdk_bdev_nvme.so.7.0 00:02:46.140 SYMLINK libspdk_bdev_nvme.so 00:02:46.399 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:46.399 CC module/event/subsystems/iobuf/iobuf.o 00:02:46.399 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:46.399 CC module/event/subsystems/vmd/vmd.o 00:02:46.399 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:46.399 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.657 CC module/event/subsystems/keyring/keyring.o 00:02:46.657 CC module/event/subsystems/sock/sock.o 00:02:46.657 LIB libspdk_event_vhost_blk.a 00:02:46.657 SO libspdk_event_vhost_blk.so.3.0 00:02:46.657 LIB libspdk_event_keyring.a 00:02:46.657 LIB libspdk_event_scheduler.a 00:02:46.657 LIB libspdk_event_sock.a 00:02:46.657 SO libspdk_event_keyring.so.1.0 00:02:46.657 LIB libspdk_event_vmd.a 00:02:46.916 LIB libspdk_event_iobuf.a 00:02:46.916 SO libspdk_event_sock.so.5.0 00:02:46.916 SO libspdk_event_scheduler.so.4.0 00:02:46.916 SYMLINK libspdk_event_vhost_blk.so 00:02:46.916 SO libspdk_event_vmd.so.6.0 00:02:46.916 SYMLINK libspdk_event_keyring.so 00:02:46.916 SO libspdk_event_iobuf.so.3.0 00:02:46.916 SYMLINK libspdk_event_scheduler.so 00:02:46.916 SYMLINK libspdk_event_vmd.so 00:02:46.916 SYMLINK libspdk_event_sock.so 00:02:46.916 SYMLINK libspdk_event_iobuf.so 00:02:47.175 CC module/event/subsystems/accel/accel.o 00:02:47.434 LIB libspdk_event_accel.a 00:02:47.434 SO libspdk_event_accel.so.6.0 00:02:47.434 SYMLINK libspdk_event_accel.so 00:02:47.692 CC module/event/subsystems/bdev/bdev.o 00:02:47.950 LIB libspdk_event_bdev.a 00:02:47.951 SO libspdk_event_bdev.so.6.0 00:02:47.951 SYMLINK libspdk_event_bdev.so 00:02:48.211 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:48.211 CC module/event/subsystems/scsi/scsi.o 00:02:48.211 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:48.211 CC module/event/subsystems/nbd/nbd.o 00:02:48.211 CC module/event/subsystems/ublk/ublk.o 00:02:48.468 LIB libspdk_event_nbd.a 00:02:48.468 SO libspdk_event_nbd.so.6.0 00:02:48.468 LIB libspdk_event_ublk.a 00:02:48.468 SO libspdk_event_ublk.so.3.0 00:02:48.468 SYMLINK libspdk_event_nbd.so 00:02:48.468 LIB libspdk_event_scsi.a 00:02:48.468 SO libspdk_event_scsi.so.6.0 00:02:48.468 SYMLINK libspdk_event_ublk.so 00:02:48.468 LIB libspdk_event_nvmf.a 00:02:48.468 SYMLINK libspdk_event_scsi.so 00:02:48.727 SO libspdk_event_nvmf.so.6.0 00:02:48.727 SYMLINK libspdk_event_nvmf.so 00:02:48.727 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.727 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.986 LIB libspdk_event_vhost_scsi.a 00:02:48.986 SO libspdk_event_vhost_scsi.so.3.0 00:02:48.986 SYMLINK libspdk_event_vhost_scsi.so 00:02:48.986 LIB libspdk_event_iscsi.a 00:02:49.242 SO libspdk_event_iscsi.so.6.0 00:02:49.242 SYMLINK libspdk_event_iscsi.so 00:02:49.242 SO libspdk.so.6.0 00:02:49.242 SYMLINK libspdk.so 00:02:49.498 CXX app/trace/trace.o 00:02:49.498 CC test/rpc_client/rpc_client_test.o 00:02:49.498 TEST_HEADER include/spdk/accel.h 00:02:49.498 TEST_HEADER include/spdk/accel_module.h 00:02:49.498 CC app/trace_record/trace_record.o 00:02:49.498 TEST_HEADER include/spdk/assert.h 00:02:49.498 TEST_HEADER 
include/spdk/barrier.h 00:02:49.498 TEST_HEADER include/spdk/base64.h 00:02:49.498 TEST_HEADER include/spdk/bdev.h 00:02:49.498 TEST_HEADER include/spdk/bdev_module.h 00:02:49.498 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.498 TEST_HEADER include/spdk/bit_array.h 00:02:49.498 TEST_HEADER include/spdk/bit_pool.h 00:02:49.498 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.755 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.755 TEST_HEADER include/spdk/blobfs.h 00:02:49.755 TEST_HEADER include/spdk/blob.h 00:02:49.755 TEST_HEADER include/spdk/conf.h 00:02:49.755 TEST_HEADER include/spdk/config.h 00:02:49.755 TEST_HEADER include/spdk/cpuset.h 00:02:49.755 TEST_HEADER include/spdk/crc16.h 00:02:49.755 TEST_HEADER include/spdk/crc32.h 00:02:49.755 TEST_HEADER include/spdk/crc64.h 00:02:49.755 TEST_HEADER include/spdk/dif.h 00:02:49.755 TEST_HEADER include/spdk/dma.h 00:02:49.755 TEST_HEADER include/spdk/endian.h 00:02:49.755 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.755 TEST_HEADER include/spdk/env.h 00:02:49.755 TEST_HEADER include/spdk/event.h 00:02:49.755 TEST_HEADER include/spdk/fd_group.h 00:02:49.755 TEST_HEADER include/spdk/fd.h 00:02:49.755 TEST_HEADER include/spdk/file.h 00:02:49.755 TEST_HEADER include/spdk/ftl.h 00:02:49.755 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.755 TEST_HEADER include/spdk/hexlify.h 00:02:49.755 TEST_HEADER include/spdk/histogram_data.h 00:02:49.755 TEST_HEADER include/spdk/idxd.h 00:02:49.755 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.755 TEST_HEADER include/spdk/init.h 00:02:49.755 CC test/thread/poller_perf/poller_perf.o 00:02:49.755 TEST_HEADER include/spdk/ioat.h 00:02:49.755 CC examples/ioat/perf/perf.o 00:02:49.755 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.755 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.755 CC examples/util/zipf/zipf.o 00:02:49.755 TEST_HEADER include/spdk/json.h 00:02:49.755 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.755 TEST_HEADER include/spdk/keyring.h 00:02:49.755 TEST_HEADER include/spdk/keyring_module.h 00:02:49.756 TEST_HEADER include/spdk/likely.h 00:02:49.756 TEST_HEADER include/spdk/log.h 00:02:49.756 TEST_HEADER include/spdk/lvol.h 00:02:49.756 TEST_HEADER include/spdk/memory.h 00:02:49.756 TEST_HEADER include/spdk/mmio.h 00:02:49.756 CC test/app/bdev_svc/bdev_svc.o 00:02:49.756 TEST_HEADER include/spdk/nbd.h 00:02:49.756 TEST_HEADER include/spdk/net.h 00:02:49.756 TEST_HEADER include/spdk/notify.h 00:02:49.756 TEST_HEADER include/spdk/nvme.h 00:02:49.756 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.756 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.756 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.756 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.756 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.756 CC test/dma/test_dma/test_dma.o 00:02:49.756 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.756 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.756 TEST_HEADER include/spdk/nvmf.h 00:02:49.756 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.756 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.756 TEST_HEADER include/spdk/opal.h 00:02:49.756 TEST_HEADER include/spdk/opal_spec.h 00:02:49.756 TEST_HEADER include/spdk/pci_ids.h 00:02:49.756 TEST_HEADER include/spdk/pipe.h 00:02:49.756 TEST_HEADER include/spdk/queue.h 00:02:49.756 TEST_HEADER include/spdk/reduce.h 00:02:49.756 TEST_HEADER include/spdk/rpc.h 00:02:49.756 TEST_HEADER include/spdk/scheduler.h 00:02:49.756 TEST_HEADER include/spdk/scsi.h 00:02:49.756 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.756 TEST_HEADER include/spdk/sock.h 00:02:49.756 
TEST_HEADER include/spdk/stdinc.h 00:02:49.756 TEST_HEADER include/spdk/string.h 00:02:49.756 TEST_HEADER include/spdk/thread.h 00:02:49.756 TEST_HEADER include/spdk/trace.h 00:02:49.756 TEST_HEADER include/spdk/trace_parser.h 00:02:49.756 TEST_HEADER include/spdk/tree.h 00:02:49.756 TEST_HEADER include/spdk/ublk.h 00:02:49.756 TEST_HEADER include/spdk/util.h 00:02:49.756 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.756 TEST_HEADER include/spdk/uuid.h 00:02:49.756 TEST_HEADER include/spdk/version.h 00:02:49.756 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.756 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.756 TEST_HEADER include/spdk/vhost.h 00:02:49.756 TEST_HEADER include/spdk/vmd.h 00:02:49.756 TEST_HEADER include/spdk/xor.h 00:02:49.756 TEST_HEADER include/spdk/zipf.h 00:02:49.756 CXX test/cpp_headers/accel.o 00:02:50.013 LINK rpc_client_test 00:02:50.013 LINK bdev_svc 00:02:50.013 LINK zipf 00:02:50.013 LINK poller_perf 00:02:50.013 LINK spdk_trace_record 00:02:50.013 CXX test/cpp_headers/accel_module.o 00:02:50.013 LINK spdk_trace 00:02:50.013 LINK ioat_perf 00:02:50.271 CXX test/cpp_headers/assert.o 00:02:50.271 CC examples/ioat/verify/verify.o 00:02:50.271 CC test/app/histogram_perf/histogram_perf.o 00:02:50.271 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.271 LINK test_dma 00:02:50.529 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:50.529 CXX test/cpp_headers/barrier.o 00:02:50.529 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.529 CC app/nvmf_tgt/nvmf_main.o 00:02:50.529 LINK verify 00:02:50.529 LINK histogram_perf 00:02:50.529 CXX test/cpp_headers/base64.o 00:02:50.529 CC examples/thread/thread/thread_ex.o 00:02:50.787 LINK mem_callbacks 00:02:50.787 LINK nvmf_tgt 00:02:50.787 CXX test/cpp_headers/bdev.o 00:02:50.787 CC test/app/jsoncat/jsoncat.o 00:02:50.787 LINK interrupt_tgt 00:02:50.787 CC examples/sock/hello_world/hello_sock.o 00:02:50.787 CC test/event/event_perf/event_perf.o 00:02:51.045 LINK jsoncat 00:02:51.045 LINK thread 00:02:51.045 CC test/env/vtophys/vtophys.o 00:02:51.045 LINK nvme_fuzz 00:02:51.045 LINK event_perf 00:02:51.045 LINK hello_sock 00:02:51.045 CXX test/cpp_headers/bdev_module.o 00:02:51.045 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.045 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.302 CXX test/cpp_headers/bdev_zone.o 00:02:51.302 LINK vtophys 00:02:51.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.302 CC test/event/reactor/reactor.o 00:02:51.302 LINK iscsi_tgt 00:02:51.302 CXX test/cpp_headers/bit_array.o 00:02:51.302 CC app/spdk_tgt/spdk_tgt.o 00:02:51.560 CC app/spdk_lspci/spdk_lspci.o 00:02:51.560 CC app/spdk_nvme_perf/perf.o 00:02:51.560 LINK reactor 00:02:51.560 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.560 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.560 LINK spdk_lspci 00:02:51.818 CXX test/cpp_headers/bit_pool.o 00:02:51.818 LINK vhost_fuzz 00:02:51.818 LINK lsvmd 00:02:51.818 CC app/spdk_nvme_identify/identify.o 00:02:51.818 LINK spdk_tgt 00:02:51.818 CC test/event/reactor_perf/reactor_perf.o 00:02:51.818 LINK env_dpdk_post_init 00:02:52.076 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.076 CXX test/cpp_headers/blob_bdev.o 00:02:52.076 CC app/spdk_top/spdk_top.o 00:02:52.076 CC examples/vmd/led/led.o 00:02:52.076 LINK reactor_perf 00:02:52.334 CXX test/cpp_headers/blobfs_bdev.o 00:02:52.334 LINK led 00:02:52.334 LINK spdk_nvme_discover 00:02:52.334 CC test/env/memory/memory_ut.o 00:02:52.334 CC test/nvme/aer/aer.o 00:02:52.334 CXX test/cpp_headers/blobfs.o 00:02:52.591 LINK spdk_nvme_perf 
00:02:52.591 CC test/event/app_repeat/app_repeat.o 00:02:52.591 CXX test/cpp_headers/blob.o 00:02:52.591 LINK aer 00:02:52.849 CC test/nvme/reset/reset.o 00:02:52.849 LINK app_repeat 00:02:52.849 CC test/nvme/sgl/sgl.o 00:02:52.849 LINK spdk_nvme_identify 00:02:52.849 CC examples/idxd/perf/perf.o 00:02:52.849 CXX test/cpp_headers/conf.o 00:02:52.849 LINK iscsi_fuzz 00:02:53.106 LINK reset 00:02:53.106 CC app/vhost/vhost.o 00:02:53.106 CXX test/cpp_headers/config.o 00:02:53.106 CXX test/cpp_headers/cpuset.o 00:02:53.106 CC test/nvme/e2edp/nvme_dp.o 00:02:53.364 LINK sgl 00:02:53.364 CXX test/cpp_headers/crc16.o 00:02:53.364 CC test/event/scheduler/scheduler.o 00:02:53.364 LINK vhost 00:02:53.364 CC test/app/stub/stub.o 00:02:53.364 CC test/nvme/overhead/overhead.o 00:02:53.364 LINK idxd_perf 00:02:53.364 CXX test/cpp_headers/crc32.o 00:02:53.364 LINK spdk_top 00:02:53.622 CC test/nvme/err_injection/err_injection.o 00:02:53.622 LINK nvme_dp 00:02:53.622 LINK scheduler 00:02:53.622 CXX test/cpp_headers/crc64.o 00:02:53.622 LINK memory_ut 00:02:53.622 LINK stub 00:02:53.622 LINK overhead 00:02:53.622 CC test/nvme/startup/startup.o 00:02:53.879 LINK err_injection 00:02:53.880 CC examples/accel/perf/accel_perf.o 00:02:53.880 CC app/spdk_dd/spdk_dd.o 00:02:53.880 CXX test/cpp_headers/dif.o 00:02:53.880 CC test/env/pci/pci_ut.o 00:02:53.880 LINK startup 00:02:53.880 CC app/fio/nvme/fio_plugin.o 00:02:53.880 CC app/fio/bdev/fio_plugin.o 00:02:54.163 CC test/nvme/reserve/reserve.o 00:02:54.163 CC test/nvme/simple_copy/simple_copy.o 00:02:54.163 CXX test/cpp_headers/dma.o 00:02:54.163 CC test/nvme/connect_stress/connect_stress.o 00:02:54.163 CC test/nvme/boot_partition/boot_partition.o 00:02:54.163 CXX test/cpp_headers/endian.o 00:02:54.163 LINK reserve 00:02:54.163 LINK accel_perf 00:02:54.163 LINK simple_copy 00:02:54.163 LINK spdk_dd 00:02:54.421 LINK pci_ut 00:02:54.421 LINK connect_stress 00:02:54.421 LINK boot_partition 00:02:54.421 CXX test/cpp_headers/env_dpdk.o 00:02:54.421 LINK spdk_bdev 00:02:54.680 CC test/nvme/compliance/nvme_compliance.o 00:02:54.680 LINK spdk_nvme 00:02:54.680 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.680 CXX test/cpp_headers/env.o 00:02:54.680 CC test/nvme/fused_ordering/fused_ordering.o 00:02:54.680 CC test/nvme/fdp/fdp.o 00:02:54.680 CC test/nvme/cuse/cuse.o 00:02:54.680 CC examples/blob/hello_world/hello_blob.o 00:02:54.937 CXX test/cpp_headers/event.o 00:02:54.937 CC examples/nvme/hello_world/hello_world.o 00:02:54.937 LINK doorbell_aers 00:02:54.937 LINK fused_ordering 00:02:54.937 CC examples/blob/cli/blobcli.o 00:02:54.937 LINK nvme_compliance 00:02:54.937 CC test/accel/dif/dif.o 00:02:54.937 CXX test/cpp_headers/fd_group.o 00:02:54.937 LINK hello_blob 00:02:55.195 LINK hello_world 00:02:55.195 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:55.195 LINK fdp 00:02:55.195 CC examples/nvme/reconnect/reconnect.o 00:02:55.195 CXX test/cpp_headers/fd.o 00:02:55.195 CXX test/cpp_headers/file.o 00:02:55.195 CXX test/cpp_headers/ftl.o 00:02:55.452 CC test/blobfs/mkfs/mkfs.o 00:02:55.452 LINK blobcli 00:02:55.452 LINK dif 00:02:55.452 CXX test/cpp_headers/gpt_spec.o 00:02:55.452 LINK reconnect 00:02:55.452 CC examples/nvme/arbitration/arbitration.o 00:02:55.452 CC examples/nvme/hotplug/hotplug.o 00:02:55.452 LINK mkfs 00:02:55.710 CXX test/cpp_headers/hexlify.o 00:02:55.710 CXX test/cpp_headers/histogram_data.o 00:02:55.710 CC test/lvol/esnap/esnap.o 00:02:55.710 LINK nvme_manage 00:02:55.968 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:55.968 CXX 
test/cpp_headers/idxd.o 00:02:55.968 LINK hotplug 00:02:55.968 LINK arbitration 00:02:55.968 CC examples/nvme/abort/abort.o 00:02:55.968 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:55.968 CC test/bdev/bdevio/bdevio.o 00:02:56.225 CXX test/cpp_headers/idxd_spec.o 00:02:56.225 LINK cmb_copy 00:02:56.225 CXX test/cpp_headers/init.o 00:02:56.225 CC examples/bdev/hello_world/hello_bdev.o 00:02:56.225 CXX test/cpp_headers/ioat.o 00:02:56.225 LINK pmr_persistence 00:02:56.225 LINK cuse 00:02:56.483 CXX test/cpp_headers/ioat_spec.o 00:02:56.483 CXX test/cpp_headers/iscsi_spec.o 00:02:56.483 CXX test/cpp_headers/json.o 00:02:56.483 CXX test/cpp_headers/jsonrpc.o 00:02:56.483 LINK bdevio 00:02:56.483 LINK abort 00:02:56.483 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.483 LINK hello_bdev 00:02:56.483 CXX test/cpp_headers/keyring.o 00:02:56.740 CXX test/cpp_headers/keyring_module.o 00:02:56.740 CXX test/cpp_headers/likely.o 00:02:56.740 CXX test/cpp_headers/log.o 00:02:56.740 CXX test/cpp_headers/lvol.o 00:02:56.740 CXX test/cpp_headers/memory.o 00:02:56.740 CXX test/cpp_headers/mmio.o 00:02:56.740 CXX test/cpp_headers/nbd.o 00:02:56.740 CXX test/cpp_headers/net.o 00:02:56.998 CXX test/cpp_headers/notify.o 00:02:56.998 CXX test/cpp_headers/nvme.o 00:02:56.998 CXX test/cpp_headers/nvme_intel.o 00:02:56.998 CXX test/cpp_headers/nvme_ocssd.o 00:02:56.998 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.998 CXX test/cpp_headers/nvme_spec.o 00:02:56.998 CXX test/cpp_headers/nvme_zns.o 00:02:57.256 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.256 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.256 CXX test/cpp_headers/nvmf.o 00:02:57.256 CXX test/cpp_headers/nvmf_spec.o 00:02:57.256 CXX test/cpp_headers/nvmf_transport.o 00:02:57.256 CXX test/cpp_headers/opal.o 00:02:57.256 CXX test/cpp_headers/opal_spec.o 00:02:57.256 CXX test/cpp_headers/pci_ids.o 00:02:57.514 CXX test/cpp_headers/pipe.o 00:02:57.514 CXX test/cpp_headers/queue.o 00:02:57.514 CXX test/cpp_headers/reduce.o 00:02:57.514 CXX test/cpp_headers/rpc.o 00:02:57.514 CXX test/cpp_headers/scheduler.o 00:02:57.514 CXX test/cpp_headers/scsi.o 00:02:57.514 CXX test/cpp_headers/scsi_spec.o 00:02:57.514 CXX test/cpp_headers/sock.o 00:02:57.514 CXX test/cpp_headers/stdinc.o 00:02:57.514 CXX test/cpp_headers/string.o 00:02:57.772 CXX test/cpp_headers/thread.o 00:02:57.772 CXX test/cpp_headers/trace.o 00:02:57.772 CXX test/cpp_headers/trace_parser.o 00:02:57.772 LINK bdevperf 00:02:57.772 CXX test/cpp_headers/tree.o 00:02:57.772 CXX test/cpp_headers/ublk.o 00:02:57.772 CXX test/cpp_headers/util.o 00:02:57.772 CXX test/cpp_headers/uuid.o 00:02:57.772 CXX test/cpp_headers/version.o 00:02:57.772 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.772 CXX test/cpp_headers/vfio_user_spec.o 00:02:57.772 CXX test/cpp_headers/vhost.o 00:02:58.029 CXX test/cpp_headers/vmd.o 00:02:58.029 CXX test/cpp_headers/xor.o 00:02:58.029 CXX test/cpp_headers/zipf.o 00:02:58.287 CC examples/nvmf/nvmf/nvmf.o 00:02:58.853 LINK nvmf 00:03:02.161 LINK esnap 00:03:02.161 00:03:02.161 real 1m12.882s 00:03:02.161 user 7m20.523s 00:03:02.161 sys 1m44.221s 00:03:02.161 19:40:56 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:02.161 19:40:56 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.161 ************************************ 00:03:02.161 END TEST make 00:03:02.161 ************************************ 00:03:02.161 19:40:56 -- common/autotest_common.sh@1142 -- $ return 0 00:03:02.161 19:40:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.161 
19:40:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.161 19:40:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.161 19:40:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.161 19:40:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.161 19:40:56 -- pm/common@44 -- $ pid=5130 00:03:02.161 19:40:56 -- pm/common@50 -- $ kill -TERM 5130 00:03:02.161 19:40:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.161 19:40:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.161 19:40:56 -- pm/common@44 -- $ pid=5132 00:03:02.161 19:40:56 -- pm/common@50 -- $ kill -TERM 5132 00:03:02.161 19:40:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:02.161 19:40:56 -- nvmf/common.sh@7 -- # uname -s 00:03:02.161 19:40:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.161 19:40:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.161 19:40:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.161 19:40:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.161 19:40:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.161 19:40:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.161 19:40:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.161 19:40:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.161 19:40:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.161 19:40:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.161 19:40:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:03:02.161 19:40:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:03:02.161 19:40:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.161 19:40:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.161 19:40:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:02.161 19:40:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.161 19:40:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:02.161 19:40:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.161 19:40:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.161 19:40:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.161 19:40:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.161 19:40:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.161 19:40:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.161 19:40:56 -- paths/export.sh@5 -- # 
export PATH 00:03:02.161 19:40:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.161 19:40:56 -- nvmf/common.sh@47 -- # : 0 00:03:02.161 19:40:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:02.161 19:40:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:02.161 19:40:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.161 19:40:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.161 19:40:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.161 19:40:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:02.161 19:40:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:02.161 19:40:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:02.161 19:40:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.161 19:40:56 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.161 19:40:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.161 19:40:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.161 19:40:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.161 19:40:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.161 19:40:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.161 19:40:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.420 19:40:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.420 19:40:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.420 19:40:56 -- spdk/autotest.sh@48 -- # udevadm_pid=52860 00:03:02.420 19:40:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.420 19:40:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.420 19:40:56 -- pm/common@17 -- # local monitor 00:03:02.420 19:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.420 19:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.420 19:40:56 -- pm/common@25 -- # sleep 1 00:03:02.420 19:40:56 -- pm/common@21 -- # date +%s 00:03:02.420 19:40:56 -- pm/common@21 -- # date +%s 00:03:02.420 19:40:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721072456 00:03:02.420 19:40:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721072456 00:03:02.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721072456_collect-vmstat.pm.log 00:03:02.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721072456_collect-cpu-load.pm.log 00:03:03.355 19:40:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.355 19:40:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.355 19:40:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:03.355 19:40:57 -- common/autotest_common.sh@10 -- # set +x 00:03:03.355 19:40:57 -- spdk/autotest.sh@59 -- # create_test_list 00:03:03.355 19:40:57 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:03.355 19:40:57 -- 
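Two details of the setup shown above are easy to miss: the kernel core pattern is redirected to core-collector.sh for the duration of the run, and both resource monitors are launched with a single epoch timestamp so their .pm.log files pair up. A rough sketch of those steps, assuming the paths printed in the log; writing core_pattern needs root, and saving the old systemd-coredump handler for a later restore is an assumption:

  out=/home/vagrant/spdk_repo/spdk/../output
  old_core_pattern=$(</proc/sys/kernel/core_pattern)            # remember the systemd-coredump handler
  echo "|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  mkdir -p "$out/coredumps"

  stamp=$(date +%s)                                             # shared suffix, e.g. 1721072456
  /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autotest.sh.$stamp" &
  /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat   -d "$out/power" -l -p "monitor.autotest.sh.$stamp" &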
common/autotest_common.sh@10 -- # set +x 00:03:03.355 19:40:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:03.355 19:40:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:03.355 19:40:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:03.355 19:40:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.355 19:40:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:03.355 19:40:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.355 19:40:57 -- common/autotest_common.sh@1455 -- # uname 00:03:03.355 19:40:57 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:03.355 19:40:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.355 19:40:57 -- common/autotest_common.sh@1475 -- # uname 00:03:03.355 19:40:57 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:03.355 19:40:57 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:03.355 19:40:57 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:03.355 19:40:57 -- spdk/autotest.sh@72 -- # hash lcov 00:03:03.355 19:40:57 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:03.355 19:40:57 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:03.355 --rc lcov_branch_coverage=1 00:03:03.355 --rc lcov_function_coverage=1 00:03:03.355 --rc genhtml_branch_coverage=1 00:03:03.355 --rc genhtml_function_coverage=1 00:03:03.355 --rc genhtml_legend=1 00:03:03.355 --rc geninfo_all_blocks=1 00:03:03.356 ' 00:03:03.356 19:40:57 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:03.356 --rc lcov_branch_coverage=1 00:03:03.356 --rc lcov_function_coverage=1 00:03:03.356 --rc genhtml_branch_coverage=1 00:03:03.356 --rc genhtml_function_coverage=1 00:03:03.356 --rc genhtml_legend=1 00:03:03.356 --rc geninfo_all_blocks=1 00:03:03.356 ' 00:03:03.356 19:40:57 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:03.356 --rc lcov_branch_coverage=1 00:03:03.356 --rc lcov_function_coverage=1 00:03:03.356 --rc genhtml_branch_coverage=1 00:03:03.356 --rc genhtml_function_coverage=1 00:03:03.356 --rc genhtml_legend=1 00:03:03.356 --rc geninfo_all_blocks=1 00:03:03.356 --no-external' 00:03:03.356 19:40:57 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:03.356 --rc lcov_branch_coverage=1 00:03:03.356 --rc lcov_function_coverage=1 00:03:03.356 --rc genhtml_branch_coverage=1 00:03:03.356 --rc genhtml_function_coverage=1 00:03:03.356 --rc genhtml_legend=1 00:03:03.356 --rc geninfo_all_blocks=1 00:03:03.356 --no-external' 00:03:03.356 19:40:57 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:03.356 lcov: LCOV version 1.14 00:03:03.356 19:40:57 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:21.442 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:21.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:33.641 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:33.642 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions 
found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:33.642 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:33.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions 
found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:36.178 19:41:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:36.178 19:41:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:36.178 19:41:29 -- common/autotest_common.sh@10 -- # set +x 00:03:36.178 19:41:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:36.178 19:41:29 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.437 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:36.437 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:36.437 19:41:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:36.437 19:41:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.437 19:41:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.437 19:41:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.437 19:41:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.437 19:41:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.437 19:41:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.437 19:41:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.437 19:41:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:36.437 19:41:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:36.437 19:41:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.437 19:41:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:36.437 19:41:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:36.437 19:41:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.437 
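The long run of geninfo warnings above is the expected by-product of the baseline capture: lcov -c -i records zero execution counts for everything that was built, and the cpp_headers objects are translation units that only include a header, so there are no functions for gcov to report. To turn the baseline into a usable report it is normally merged with a post-test capture; only the Baseline command appears in this log, so the second capture and the merge below are assumptions:

  SPDK=/home/vagrant/spdk_repo/spdk
  OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
  lcov $OPTS -q -c -i -t Baseline -d "$SPDK" -o cov_base.info   # zero-count baseline, as run above
  # ... functional tests execute here ...
  lcov $OPTS -q -c -t Tests -d "$SPDK" -o cov_test.info         # assumed post-test capture
  lcov -a cov_base.info -a cov_test.info -o cov_total.info      # assumed merge, keeps untested files visible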
19:41:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:36.437 19:41:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:36.437 19:41:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:36.437 19:41:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.437 19:41:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:36.437 19:41:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.437 19:41:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.437 19:41:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:36.437 19:41:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:36.437 19:41:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.437 No valid GPT data, bailing 00:03:36.437 19:41:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.437 19:41:30 -- scripts/common.sh@391 -- # pt= 00:03:36.437 19:41:30 -- scripts/common.sh@392 -- # return 1 00:03:36.437 19:41:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.437 1+0 records in 00:03:36.437 1+0 records out 00:03:36.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420228 s, 250 MB/s 00:03:36.437 19:41:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.437 19:41:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.437 19:41:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:36.437 19:41:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:36.437 19:41:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:36.696 No valid GPT data, bailing 00:03:36.696 19:41:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:36.696 19:41:30 -- scripts/common.sh@391 -- # pt= 00:03:36.696 19:41:30 -- scripts/common.sh@392 -- # return 1 00:03:36.696 19:41:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:36.696 1+0 records in 00:03:36.697 1+0 records out 00:03:36.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492898 s, 213 MB/s 00:03:36.697 19:41:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.697 19:41:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.697 19:41:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:36.697 19:41:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:36.697 19:41:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:36.697 No valid GPT data, bailing 00:03:36.697 19:41:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:36.697 19:41:30 -- scripts/common.sh@391 -- # pt= 00:03:36.697 19:41:30 -- scripts/common.sh@392 -- # return 1 00:03:36.697 19:41:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:36.697 1+0 records in 00:03:36.697 1+0 records out 00:03:36.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474357 s, 221 MB/s 00:03:36.697 19:41:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.697 19:41:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.697 19:41:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:36.697 19:41:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:36.697 19:41:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:36.697 No valid GPT data, bailing 00:03:36.697 19:41:30 -- 
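The get_zoned_devs pass above walks every /sys/block/nvme* entry and records any namespace whose queue/zoned attribute is something other than none; on this runner all four namespaces report none, so the subsequent (( 0 > 0 )) check is a no-op. A compact sketch of the same check (the associative array name comes from the log, the rest is a simplified rendering):

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue
      if [[ $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[${nvme##*/}]=1            # remember zoned namespaces so later steps can skip them
      fi
  done
  echo "zoned namespaces: ${!zoned_devs[*]:-none}"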
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:36.697 19:41:30 -- scripts/common.sh@391 -- # pt= 00:03:36.697 19:41:30 -- scripts/common.sh@392 -- # return 1 00:03:36.697 19:41:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:36.697 1+0 records in 00:03:36.697 1+0 records out 00:03:36.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421408 s, 249 MB/s 00:03:36.697 19:41:30 -- spdk/autotest.sh@118 -- # sync 00:03:36.955 19:41:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.955 19:41:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.955 19:41:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:38.329 19:41:32 -- spdk/autotest.sh@124 -- # uname -s 00:03:38.329 19:41:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:38.329 19:41:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.329 19:41:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.329 19:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.329 19:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:38.329 ************************************ 00:03:38.329 START TEST setup.sh 00:03:38.329 ************************************ 00:03:38.329 19:41:32 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.586 * Looking for test storage... 00:03:38.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.586 19:41:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:38.586 19:41:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:38.586 19:41:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.586 19:41:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.586 19:41:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.586 19:41:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.586 ************************************ 00:03:38.586 START TEST acl 00:03:38.586 ************************************ 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.586 * Looking for test storage... 
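Before the functional tests begin, each whole namespace above is probed with spdk-gpt.py and blkid; when neither finds a partition table ('No valid GPT data, bailing' plus an empty PTTYPE), the first MiB is zeroed so stale metadata cannot leak into the run, and the devices are synced afterwards. A sketch of that per-device pass with the commands taken from the log (the skip/wipe decision is simplified here):

  shopt -s extglob                                   # needed for the !(*p*) glob below
  for dev in /dev/nvme*n!(*p*); do                   # whole namespaces only, partitions excluded
      [[ -b $dev ]] || continue
      /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" || true   # prints "No valid GPT data, bailing" on blank disks
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1    # nothing on the device: zero the first MiB
      fi
  done
  sync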
00:03:38.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.586 19:41:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:38.586 19:41:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.586 19:41:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:38.586 19:41:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:38.586 19:41:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:38.586 19:41:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:38.587 19:41:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:38.587 19:41:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.587 19:41:32 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.519 19:41:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.519 19:41:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.519 19:41:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.519 19:41:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.519 19:41:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.519 19:41:33 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.084 19:41:34 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 Hugepages 00:03:40.084 node hugesize free / total 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 00:03:40.084 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:40.084 19:41:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.084 19:41:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.084 19:41:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.084 19:41:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.084 ************************************ 00:03:40.084 START TEST denied 00:03:40.084 ************************************ 00:03:40.084 19:41:34 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:40.084 19:41:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:40.084 19:41:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:40.084 19:41:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.084 19:41:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.084 19:41:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.018 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.018 19:41:35 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.583 00:03:41.583 real 0m1.381s 00:03:41.583 user 0m0.555s 00:03:41.583 sys 0m0.774s 00:03:41.583 ************************************ 00:03:41.583 END TEST denied 00:03:41.583 ************************************ 00:03:41.583 19:41:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.583 19:41:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:41.583 19:41:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:41.583 19:41:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.583 19:41:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.583 19:41:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.583 19:41:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.583 ************************************ 00:03:41.583 START TEST allowed 00:03:41.583 ************************************ 00:03:41.583 19:41:35 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:41.583 19:41:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:41.583 19:41:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:41.583 19:41:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:41.583 19:41:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.583 19:41:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.532 19:41:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.099 00:03:43.099 real 0m1.442s 00:03:43.099 user 0m0.627s 00:03:43.099 sys 0m0.798s 00:03:43.099 19:41:37 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
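The denied/allowed ACL tests above drive setup.sh purely through environment variables: PCI_BLOCKED makes the script skip a controller (hence 'Skipping denied controller at 0000:00:10.0'), while PCI_ALLOWED restricts rebinding to the listed addresses, which is why 0000:00:10.0 ends up on uio_pci_generic in the allowed case. The same knobs work outside the test; a brief usage sketch with the BDFs from this run:

  # leave 0000:00:10.0 alone when rebinding devices for SPDK
  PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

  # only 0000:00:10.0 may be rebound to a userspace driver
  PCI_ALLOWED='0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

  # return everything to the kernel nvme driver, as the tests do between runs
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset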
xtrace_disable 00:03:43.099 ************************************ 00:03:43.099 END TEST allowed 00:03:43.099 ************************************ 00:03:43.099 19:41:37 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:43.099 19:41:37 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:43.099 ************************************ 00:03:43.099 END TEST acl 00:03:43.099 ************************************ 00:03:43.099 00:03:43.099 real 0m4.548s 00:03:43.099 user 0m1.983s 00:03:43.099 sys 0m2.500s 00:03:43.099 19:41:37 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.099 19:41:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.099 19:41:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:43.099 19:41:37 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.099 19:41:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.099 19:41:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.099 19:41:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.099 ************************************ 00:03:43.099 START TEST hugepages 00:03:43.099 ************************************ 00:03:43.099 19:41:37 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.099 * Looking for test storage... 00:03:43.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6014200 kB' 'MemAvailable: 7394096 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 435964 kB' 'Inactive: 1265212 kB' 'Active(anon): 115104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 105964 kB' 'Mapped: 48600 kB' 'Shmem: 
10488 kB' 'KReclaimable: 61528 kB' 'Slab: 132848 kB' 'SReclaimable: 61528 kB' 'SUnreclaim: 71320 kB' 'KernelStack: 6284 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.099 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.358 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.358 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.359 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.360 19:41:37 
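Editor's note: the trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it hits Hugepagesize, echoing 2048 and letting hugepages.sh record default_hugepages=2048 plus the nr_hugepages paths it will adjust. A minimal sketch of that parsing pattern, in plain bash (this is an illustration of the idea, not the SPDK helper itself):

    # Sketch only: read one field from /proc/meminfo the way the trace does,
    # splitting each line on ': ' and returning the value for the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on this runner (value is in kB)
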
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.360 19:41:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.360 19:41:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.360 19:41:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.360 19:41:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.360 ************************************ 00:03:43.360 START TEST default_setup 00:03:43.360 ************************************ 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.360 19:41:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.928 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.240 0000:00:11.0 (1b36 
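Editor's note: the default_setup test starting here asks get_test_nr_hugepages for 2097152 kB on node 0; with the 2048 kB page size found above that is 1024 pages, which the trace stores in nodes_test[0] before calling scripts/setup.sh. A hedged sketch of that arithmetic and of writing the per-node count (the sysfs path is the one visible in the trace; values and the single-node assumption are illustrative, and the write needs root):

    # Sketch only: requested size in kB divided by the default huge page size
    # gives the per-node page count, which is then written to the node's sysfs file.
    size_kb=2097152
    default_hugepages=2048
    nr_hugepages=$(( size_kb / default_hugepages ))   # 1024
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

The setup.sh output that follows shows it skipping the vda partitions (they back mounted filesystems, "so not binding PCI dev") and rebinding the two emulated NVMe controllers to uio_pci_generic.
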
0010): nvme -> uio_pci_generic 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110396 kB' 'MemAvailable: 9490156 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 453092 kB' 'Inactive: 1265228 kB' 'Active(anon): 132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123068 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132652 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71428 kB' 'KernelStack: 6288 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.240 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.241 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110176 kB' 'MemAvailable: 9489936 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 453264 kB' 'Inactive: 1265228 kB' 'Active(anon): 132404 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132644 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71420 kB' 'KernelStack: 6256 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.242 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
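Editor's note: the verify_nr_hugepages pass running through this stretch repeats the same meminfo scan for AnonHugePages, HugePages_Surp and HugePages_Rsvd over fresh snapshots and compares them with the 1024 pages requested above. A hedged sketch of that kind of check, reusing the illustrative get_meminfo_sketch helper from earlier (the real script's exact bookkeeping differs; this only shows the comparison idea):

    # Sketch only: read the hugepage counters and compare against the expected total.
    expected=1024
    total=$(get_meminfo_sketch HugePages_Total)
    free=$(get_meminfo_sketch HugePages_Free)
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)
    surp=$(get_meminfo_sketch HugePages_Surp)
    echo "HugePages_Total=$total Free=$free Rsvd=$rsvd Surp=$surp"
    (( total == expected )) || echo "expected $expected huge pages, kernel reports $total" >&2
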
00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.243 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110176 kB' 'MemAvailable: 9489936 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1265228 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132632 kB' 'SReclaimable: 61224 kB' 
'SUnreclaim: 71408 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.244 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.245 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.245 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:44.245 nr_hugepages=1024 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.246 resv_hugepages=0 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.246 surplus_hugepages=0 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.246 anon_hugepages=0 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110612 kB' 'MemAvailable: 9490372 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452720 kB' 'Inactive: 1265228 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132632 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71408 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.246 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
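The long runs of "continue" entries above and below are the xtrace of setup/common.sh's get_meminfo helper stepping through every /proc/meminfo field until it reaches the one requested (HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn in this run). A minimal standalone sketch of that lookup, assuming the behaviour shown in the trace rather than quoting the SPDK source verbatim:

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (assumption: simplified from
# setup/common.sh, not the verbatim SPDK implementation).
get_meminfo() {
    local get=$1 node=${2:-}          # field name, optional NUMA node
    local mem_f=/proc/meminfo
    # With a node argument the per-node file is read instead, as in the
    # node=0 lookup later in this log.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node meminfo lines carry a "Node N " prefix; drop it first (the
    # traced script does the same with "${mem[@]#Node +([0-9]) }").
    while IFS=': ' read -r var val _; do
        # Every non-matching field is skipped -- these skips are what
        # produce the repeated "continue" entries in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo HugePages_Surp     # prints 0 on this builder
get_meminfo HugePages_Total    # prints 1024 once the pages are reserved
```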
00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
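Once the HugePages_Total scan above returns, hugepages.sh@110 repeats the sanity check already logged at @107: the kernel's total must equal the requested page count plus the reserved and surplus pages. A small self-contained sketch of that check using the values printed in this run (the field lookup via awk is an illustrative stand-in for get_meminfo):

```bash
# Consistency check mirrored from the hugepages.sh@107/@110 trace entries,
# with the values seen in this run: nr_hugepages=1024, resv=0, surp=0.
nr_hugepages=1024
resv=0
surp=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 per the dump above
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "unexpected HugePages_Total=$total" >&2
fi
```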
00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110612 kB' 'MemUsed: 4131368 kB' 'SwapCached: 0 kB' 'Active: 452652 kB' 'Inactive: 1265228 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1596556 kB' 'Mapped: 48612 kB' 'AnonPages: 122900 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132632 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.247 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.248 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.248 
19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... meminfo fields Mlocked through HugePages_Free compared against HugePages_Surp; none match, each skipped with continue ...] 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
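What the trace above shows is setup/common.sh's get_meminfo helper walking a meminfo listing field by field, skipping every field that is not the one requested, then echoing the requested value (here HugePages_Surp -> 0) and returning. A minimal sketch of that loop, reconstructed from the traced statements -- the variable and file names come from the trace itself, while the surrounding structure is an assumption, not the verbatim SPDK source:

    # enable the +([0-9]) extended glob used to strip the "Node N " prefix below
    shopt -s extglob

    # get_meminfo <field> [node] - echo the value of a single meminfo field
    # (sketch; mirrors the IFS=': ' / read -r var val _ / continue pattern in the trace)
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # the long run of skipped fields seen above
            echo "$val"                           # e.g. 0 for HugePages_Surp in this run
            return 0
        done
        return 1
    }

In the verify pass further down the same helper is called without a node argument and reads /proc/meminfo directly, which is why a full MemTotal/MemFree snapshot is printed before each of those scans.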
00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.249 node0=1024 expecting 1024 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.249 00:03:44.249 real 0m0.978s 00:03:44.249 user 0m0.444s 00:03:44.249 sys 0m0.455s 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.249 19:41:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.249 ************************************ 00:03:44.249 END TEST default_setup 00:03:44.249 ************************************ 00:03:44.249 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.249 19:41:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.249 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.249 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.249 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.249 ************************************ 00:03:44.249 START TEST per_node_1G_alloc 00:03:44.249 ************************************ 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.249 19:41:38 
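The per_node_1G_alloc trace starting here sizes the request before touching the system: get_test_nr_hugepages 1048576 0 asks for 1048576 kB (1 GiB) on node 0, and with the default 2048 kB hugepage size (Hugepagesize in the meminfo snapshots below) that becomes nr_hugepages=512, which get_test_nr_hugepages_per_node then assigns entirely to node 0. A small worked sketch of that arithmetic -- the division is inferred from the traced values (1048576 kB -> 512 pages), not copied from hugepages.sh:

    # sketch of the sizing step shown in the trace (values from this run)
    default_hugepages=2048                 # kB, Hugepagesize reported by /proc/meminfo
    size=1048576                           # kB, requested allocation: 1 GiB
    node_ids=(0)                           # everything goes to node0 in this test
    (( size >= default_hugepages )) || { echo "request smaller than one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages     # node0=512, matching nodes_test[_no_nodes]=512 in the trace
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}"   # -> NRHUGE=512 HUGENODE=0

Those values are then handed to scripts/setup.sh (the NRHUGE=512 HUGENODE=0 setup output lines that follow), and the verify_nr_hugepages pass afterwards re-reads the hugepage counters through get_meminfo to check what the kernel actually granted.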
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.249 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.823 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.823 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163664 kB' 'MemAvailable: 10543428 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 1265232 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123120 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132604 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71380 kB' 'KernelStack: 6280 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.823 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.823 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... SwapCached through VmallocChunk compared against AnonHugePages; none match, each skipped with continue ...] 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.824 19:41:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163640 kB' 'MemAvailable: 10543404 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452932 kB' 'Inactive: 1265232 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123144 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132600 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71376 kB' 'KernelStack: 6248 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 
10485760 kB' 00:03:44.824 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... MemTotal through HugePages_Total compared against HugePages_Surp; none match, each skipped with continue ...] 00:03:44.825 19:41:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.825 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163640 kB' 'MemAvailable: 10543404 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452548 kB' 'Inactive: 1265232 kB' 'Active(anon): 131688 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132656 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
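Reading aid for the xtrace above: each repeated pair of a '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' test and a 'continue' is one iteration of the key-matching loop in SPDK's test/setup/common.sh get_meminfo helper, which walks a meminfo dump one 'Key: value' pair at a time until it reaches the requested key. A minimal sketch of that pattern, simplified to read /proc/meminfo directly; only the meminfo key names and the IFS/read/continue structure come from the log, the rest is illustrative:

#!/usr/bin/env bash
# Sketch: scan "Key: value" lines and print the value of the requested key.
get_meminfo_value() {
    local get=$1 var val _                 # e.g. get=HugePages_Rsvd
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key shows up as one 'continue' above
        echo "$val"
        return 0
    done < /proc/meminfo
    echo 0                                 # key absent: report 0, as the test assumes for resv/surp
}
get_meminfo_value HugePages_Rsvd           # prints 0 in this run

Scanning field by field instead of grepping keeps the helper free of external commands, at the cost of the very verbose xtrace seen here.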
00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.826 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 
19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.827 nr_hugepages=512 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:44.827 resv_hugepages=0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.827 surplus_hugepages=0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.827 anon_hugepages=0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.827 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163640 kB' 'MemAvailable: 10543404 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452548 kB' 'Inactive: 1265232 kB' 'Active(anon): 131688 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132656 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
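A related note on the same helper, since its source-selection lines also appear in this trace (the 'local node=' assignment, the existence test on /sys/devices/system/node/node*/meminfo, 'mapfile -t mem', and the 'Node +([0-9])' prefix strip): get_meminfo can read either the system-wide /proc/meminfo or one node's meminfo, whose lines are each prefixed with 'Node N '. A hedged sketch of that selection, not the exact upstream code:

#!/usr/bin/env bash
shopt -s extglob                                # needed for the +([0-9]) pattern below
node=${1-}                                      # empty: system-wide; "0": NUMA node 0
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"                       # one "Key: value" entry per array element
mem=("${mem[@]#Node +([0-9]) }")                # per-node lines start with "Node N "; drop the prefix
printf '%s\n' "${mem[@]}" | grep '^HugePages_'  # e.g. HugePages_Total: 512, HugePages_Rsvd: 0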
00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 
19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.828 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163640 kB' 'MemUsed: 3078340 kB' 'SwapCached: 0 kB' 'Active: 452808 kB' 'Inactive: 1265232 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1596556 kB' 'Mapped: 48612 kB' 'AnonPages: 123052 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132656 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.829 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.830 node0=512 expecting 512 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.830 00:03:44.830 real 0m0.484s 00:03:44.830 user 0m0.244s 00:03:44.830 sys 0m0.272s 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.830 19:41:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.830 ************************************ 00:03:44.830 END TEST per_node_1G_alloc 00:03:44.830 ************************************ 00:03:44.830 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.830 19:41:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:44.830 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.830 19:41:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.830 19:41:38 
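The trace above is the tail of the per_node_1G_alloc check: per-node hugepage counts collected into nodes_test are bucketed, the script echoes "node0=512 expecting 512" (512 pages of 2048 kB each, i.e. 1 GiB on the single node), and the bash pattern match [[ 512 == \5\1\2 ]] passes before the test is closed out and run_test launches even_2G_alloc. A minimal sketch of that verification pattern, reconstructed from the trace rather than copied from setup/hugepages.sh (the node0=512 value is the one observed in this run):

    nodes_test=( [0]=512 )            # 2 MiB pages observed per NUMA node in this run
    declare -A sorted_t=()            # buckets the distinct per-node counts
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        echo "node${node}=${nodes_test[node]} expecting 512"   # 512 * 2 MiB = 1 GiB per node
        [[ ${nodes_test[node]} == 512 ]] || echo "unexpected count on node${node}"
    done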
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.830 ************************************ 00:03:44.830 START TEST even_2G_alloc 00:03:44.830 ************************************ 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.830 19:41:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.088 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.088 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc 
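even_2G_alloc starts by converting the requested size into a page count: get_test_nr_hugepages is called with 2097152 kB and, with the default 2048 kB hugepage size, lands on nr_hugepages=1024, which is then assigned to the single node in nodes_test before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to scripts/setup.sh. The arithmetic behind the values seen in the trace, as a small sketch (the setup.sh invocation is kept as a comment since it needs root):

    size_kb=2097152                              # 2 GiB requested by even_2G_alloc
    hugepage_kb=2048                             # default hugepage size (Hugepagesize below)
    nr_hugepages=$(( size_kb / hugepage_kb ))    # -> 1024
    echo "nr_hugepages=$nr_hugepages"
    # The test then asks setup.sh for an even per-node spread of that pool:
    #   NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh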
-- setup/hugepages.sh@92 -- # local surp 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8112796 kB' 'MemAvailable: 9492560 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 1265232 kB' 'Active(anon): 132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132628 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71404 kB' 'KernelStack: 6180 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
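The get_meminfo calls that dominate the rest of this log snapshot /proc/meminfo with mapfile (the per-node file under /sys/devices/system/node is used only when the node argument is set, which is why the node/node/meminfo existence check fails here), strip any "Node N " prefix, and then scan field by field until the requested key is found. A condensed, approximate reconstruction of that helper based only on what the trace shows (names follow common.sh; the body is illustrative, not the script's exact text):

    shopt -s extglob                              # needed for the +([0-9]) strip below
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A per-node query reads the node-specific meminfo instead, when present.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo_sketch AnonHugePages              # prints 0 for the snapshot shown above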
setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8113144 kB' 'MemAvailable: 9492908 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452480 kB' 'Inactive: 
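With AnonHugePages at 0 kB the helper returns 0 and verify_nr_hugepages records anon=0, then immediately repeats the same scan for HugePages_Surp. Because the whole run is under set -x, every skipped field produces its own continue/IFS/read trace lines, which is why a single lookup spans dozens of lines here. The equivalent one-shot lookups, shown only to make the meaning of the surrounding trace explicit (this is not the test's own code):

    awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo   # -> 0 in this run
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # -> 0 in this run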
1265232 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132636 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71412 kB' 'KernelStack: 6224 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 
19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8113464 kB' 'MemAvailable: 9493228 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452416 kB' 'Inactive: 1265232 kB' 'Active(anon): 131556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132640 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71416 kB' 'KernelStack: 6256 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
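surp=0 is recorded and the next scan targets HugePages_Rsvd. Reading the snapshot printed just above: HugePages_Total and HugePages_Free are both 1024 (the pool is exactly the requested NRHUGE and nothing has faulted pages in yet), HugePages_Rsvd is 0 (no mappings holding reservations), and HugePages_Surp is 0 (nothing allocated beyond the configured pool). A trivial consistency check along the same lines, illustrative only:

    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    (( total == 1024 )) && echo "pool matches the requested NRHUGE"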
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.356 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.357 nr_hugepages=1024 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.357 resv_hugepages=0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.357 surplus_hugepages=0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.357 anon_hugepages=0 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8115968 kB' 'MemAvailable: 9495732 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 1265232 kB' 'Active(anon): 131528 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132640 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71416 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.358 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.358 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8116344 kB' 'MemUsed: 4125636 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1265232 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1596556 kB' 'Mapped: 48612 kB' 'AnonPages: 122944 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132640 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.360 node0=1024 expecting 1024 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.360 00:03:45.360 real 0m0.518s 00:03:45.360 user 0m0.272s 00:03:45.360 sys 0m0.279s 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.360 19:41:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.360 ************************************ 00:03:45.360 END TEST even_2G_alloc 00:03:45.360 ************************************ 00:03:45.360 19:41:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.360 19:41:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:45.360 19:41:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.360 19:41:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.360 19:41:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.360 ************************************ 00:03:45.360 START TEST odd_alloc 00:03:45.360 ************************************ 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:45.360 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
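(Reading aid, a sketch rather than SPDK source.) The even_2G_alloc block above finishes with verify_nr_hugepages confirming "node0=1024 expecting 1024", and odd_alloc is about to repeat the same checks with HUGEMEM=2049, i.e. 1025 pages of 2048 kB. The long runs of setup/common.sh@31/@32 entries are one field-by-field scan of /proc/meminfo (or /sys/devices/system/node/node0/meminfo) per query. The following minimal bash sketch is reconstructed from the trace, not copied from setup/common.sh; the name get_meminfo_sketch is made up for illustration, and the real script strips the "Node <n> " prefix with an extglob pattern over the whole mapfile'd array instead of per line.

#!/usr/bin/env bash
# Query a single key from /proc/meminfo, or from a per-node meminfo file
# when a node number is supplied (mirrors the get_meminfo calls in the trace).
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        # per-node files prefix every line with "Node <n> "; drop it
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        # keys that don't match just continue, exactly as in the @32 entries
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

# The accounting the test performs (hugepages.sh@107/@110 above): the kernel's
# pool must equal the requested page count plus surplus and reserved pages.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"

On this test VM the queries return 1024, 0 and 0 respectively, which is why the trace ends each scan with "echo 1024" or "echo 0" followed by "return 0" before the per-node HugePages_Surp pass.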
00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.361 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.933 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.933 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8122784 kB' 'MemAvailable: 9502552 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1265236 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123080 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132668 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6244 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.933 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 
19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 
19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
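Note on the long run of [[ ... ]] / continue entries above: that is setup/common.sh's get_meminfo walking a snapshot of /proc/meminfo one "key: value" pair at a time until it reaches the requested key (here AnonHugePages, which comes back 0, so anon=0). The following is a stand-alone sketch that mirrors the traced behaviour (snapshot with mapfile, strip any "Node <id> " prefix for per-node files, split on ': ', echo the value on a literal match); it is not copied from setup/common.sh.

    # Minimal sketch of a get_meminfo-style lookup, matching what the trace shows.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # A quoted, literal pattern here is what xtrace prints escaped in the log.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this VM, matching the trace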
00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8122532 kB' 'MemAvailable: 9502300 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1265236 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123036 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132676 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6212 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 
19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.935 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 
19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.936 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8122532 kB' 'MemAvailable: 9502300 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1265236 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132680 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71456 kB' 'KernelStack: 6256 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
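Reading note on the trace format: entries such as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] look odd only because bash's xtrace re-prints a quoted (literal, non-glob) right-hand side of == with every character backslash-escaped. A tiny demo, not part of the SPDK scripts, reproduces the same output:

    # Demo only: reproduce the escaped pattern seen in the xtrace output above.
    set -x
    get=HugePages_Rsvd
    var=MemTotal
    [[ $var == "$get" ]] && echo match || echo 'no match'
    set +x
    # The traced comparison resembles:  [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]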
00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.938 nr_hugepages=1025 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:45.938 resv_hugepages=0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.938 surplus_hugepages=0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.938 anon_hugepages=0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8122532 kB' 'MemAvailable: 9502300 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452484 kB' 'Inactive: 1265236 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122988 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132676 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6224 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8123188 kB' 'MemUsed: 4118792 kB' 'SwapCached: 0 kB' 'Active: 452496 kB' 'Inactive: 1265236 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1596560 kB' 'Mapped: 48612 kB' 'AnonPages: 123004 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132668 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
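The entries above are the xtrace of setup/common.sh's get_meminfo walking /proc/meminfo one key at a time. As a rough guide to what the repeated IFS=': ' / read -r var val _ / continue lines are doing, here is a minimal bash sketch of the same parsing; the name get_meminfo_sketch and its exact shape are illustrative only, not the project's code.

    #!/usr/bin/env bash
    # Illustrative only: a condensed stand-in for the get_meminfo traced above.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _

        # With a node index, read the per-node file from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip the prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the comparisons seen in the trace
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd, it would print 0 for the snapshot captured in the trace above.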
00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
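Earlier in the trace, get_nodes enumerated the NUMA topology before the per-node checks began. A hedged sketch of that enumeration follows; reading nr_hugepages from each node's hugepages sysfs directory is an assumption about where the count comes from, and the variable names mirror the trace (nodes_sys, no_nodes) purely for readability.

    #!/usr/bin/env bash
    # Illustrative sketch of the node enumeration seen as get_nodes in the trace.
    shopt -s extglob nullglob

    declare -A nodes_sys=()

    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}   # ".../node0" -> "0", as in nodes_sys[${node##*node}]
        hp_file=$node/hugepages/hugepages-2048kB/nr_hugepages   # assumed source
        if [[ -r $hp_file ]]; then
            nodes_sys[$idx]=$(<"$hp_file")
        else
            nodes_sys[$idx]=0
        fi
    done

    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes visible in sysfs" >&2; exit 1; }
    echo "no_nodes=$no_nodes"   # this run saw exactly one node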
00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
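What all of the scanning above feeds into is a small piece of arithmetic: the pool the kernel reports in HugePages_Total has to equal the requested count plus any reserved and surplus pages, globally and then per node. The sketch below condenses that check; the hp helper and the hard-coded request of 1025 pages are illustrative, the latter simply being what this odd_alloc run asked for.

    #!/usr/bin/env bash
    # Illustrative condensation of the consistency check driving the trace above.

    # hp <key> [node]: print the value for <key> from /proc/meminfo or a node file.
    hp() {
        local f=/proc/meminfo
        [[ -n ${2:-} ]] && f=/sys/devices/system/node/node$2/meminfo
        awk -v k="$1:" '{ for (i = 1; i < NF; i++) if ($i == k) { print $(i + 1); exit } }' "$f"
    }

    requested=1025   # what this odd_alloc run asked the kernel for

    total=$(hp HugePages_Total)
    resv=$(hp HugePages_Rsvd)
    surp=$(hp HugePages_Surp)

    # Global check, mirroring (( 1025 == nr_hugepages + surp + resv )) in the trace.
    (( total == requested + surp + resv )) || { echo "pool mismatch: $total" >&2; exit 1; }

    # Per-node check; with a single node the whole odd-sized pool lands on node0,
    # which is why the trace later prints "node0=1025 expecting 1025".
    for d in /sys/devices/system/node/node[0-9]*; do
        n=${d##*node}
        echo "node$n=$(hp HugePages_Total "$n") expecting $requested"
    done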
00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.941 node0=1025 expecting 1025 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:45.941 00:03:45.941 real 0m0.528s 00:03:45.941 user 0m0.296s 00:03:45.941 sys 0m0.266s 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.941 19:41:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.941 ************************************ 00:03:45.941 END TEST odd_alloc 00:03:45.941 ************************************ 00:03:45.941 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.941 19:41:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:45.941 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.941 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.941 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.941 ************************************ 00:03:45.941 START TEST custom_alloc 00:03:45.941 ************************************ 00:03:45.941 19:41:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:45.941 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.942 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.466 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.466 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9175372 kB' 'MemAvailable: 10555140 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452680 kB' 'Inactive: 1265236 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132628 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71404 kB' 'KernelStack: 6208 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
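The custom_alloc test that starts in this part of the log turns a memory budget into a hugepage count before verifying it: get_test_nr_hugepages 1048576 divides the 1 GiB budget by the default hugepage size (2048 kB here) to get 512 pages, then pins them to node 0 via HUGENODE='nodes_hp[0]=512'. The sketch below reproduces only that arithmetic; it is not the project's hugepages.sh, and the variable names are kept close to the trace just for readability.

    #!/usr/bin/env bash
    # Illustrative sketch of the sizing step traced as get_test_nr_hugepages above.

    size_kb=1048576   # the 1 GiB budget custom_alloc requests

    # Default hugepage size in kB, e.g. "Hugepagesize: 2048 kB" -> 2048.
    default_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)

    (( size_kb >= default_kb )) || { echo "budget below one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_kb ))   # 1048576 / 2048 = 512

    # Pin the whole pool to node 0, matching HUGENODE='nodes_hp[0]=512' in the trace.
    HUGENODE="nodes_hp[0]=$nr_hugepages"
    echo "nr_hugepages=$nr_hugepages HUGENODE=$HUGENODE"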
00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 9176068 kB' 'MemAvailable: 10555836 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 1265236 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132608 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71384 kB' 'KernelStack: 6176 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.468 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9176068 kB' 'MemAvailable: 10555836 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452484 kB' 'Inactive: 1265236 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132568 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71344 kB' 'KernelStack: 6256 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.470 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.471 nr_hugepages=512 00:03:46.471 resv_hugepages=0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 
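(Once anon, surp and resv have been read back, setup/hugepages.sh echoes the counters and then asserts that the requested pool size still matches what the kernel reports. Below is a minimal sketch of that accounting, paraphrasing the checks traced around this point; verify_hugepages_sketch, its parameters, and the reuse of get_meminfo_sketch from the earlier sketch are illustrative assumptions, not the actual setup/hugepages.sh code.)

# Sketch only: mirrors the hugepages accounting visible in the trace.
verify_hugepages_sketch() {
    local want=$1           # requested pool size, 512 pages in this run
    local nr_hugepages=$2   # value the allocation step believes it configured
    local anon surp resv

    anon=$(get_meminfo_sketch AnonHugePages)    # transparent hugepages, kB
    surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved pages

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool only passes when the requested size matches the configured
    # count with surplus and reserved pages folded in, and exactly.
    (( want == nr_hugepages + surp + resv )) || return 1
    (( want == nr_hugepages )) || return 1
}

# Example: verify_hugepages_sketch 512 "$(get_meminfo_sketch HugePages_Total)"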
00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.471 surplus_hugepages=0 00:03:46.471 anon_hugepages=0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9176068 kB' 'MemAvailable: 10555836 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452452 kB' 'Inactive: 1265236 kB' 'Active(anon): 131592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123012 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132568 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71344 kB' 'KernelStack: 6256 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.472 
19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9177004 kB' 'MemUsed: 3064976 kB' 'SwapCached: 0 kB' 'Active: 452760 kB' 'Inactive: 1265236 kB' 'Active(anon): 131900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1596560 kB' 'Mapped: 48612 kB' 'AnonPages: 123056 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132560 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.473 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.474 node0=512 expecting 512 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.474 00:03:46.474 real 0m0.520s 00:03:46.474 user 0m0.269s 00:03:46.474 sys 0m0.259s 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.474 19:41:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.474 ************************************ 00:03:46.474 END TEST custom_alloc 
00:03:46.474 ************************************ 00:03:46.474 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.474 19:41:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:46.474 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.474 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.474 19:41:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.474 ************************************ 00:03:46.474 START TEST no_shrink_alloc 00:03:46.474 ************************************ 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.474 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.475 19:41:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.048 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.048 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:47.048 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129272 kB' 'MemAvailable: 9509040 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 453344 kB' 'Inactive: 1265236 kB' 'Active(anon): 132484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132568 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71344 kB' 'KernelStack: 6260 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
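For the no_shrink_alloc run that starts above: get_test_nr_hugepages is handed size 2097152 for node 0, and at the 2048 kB default hugepage size that works out to nr_hugepages=1024, which matches the HugePages_Total: 1024 and Hugetlb: 2097152 kB values in the meminfo dump just printed (so the size argument is evidently in kB). The "always [madvise] never" check is the transparent-hugepage mode string, presumably read from /sys/kernel/mm/transparent_hugepage/enabled; since THP is not pinned to [never], verify_nr_hugepages also reads AnonHugePages, which is the field scan continuing below. A back-of-the-envelope check of the arithmetic, using standard kernel interfaces rather than the repo's helpers:

  # Values taken from the meminfo dump above; the unit assumption (kB) is ours.
  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # prints nr_hugepages=1024
  # Per-node counts used by the node-pinned tests are exposed through sysfs
  # (writable as root):
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages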
00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.048 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129524 kB' 'MemAvailable: 9509292 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 1265236 kB' 'Active(anon): 132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123188 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132564 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6240 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.049 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.050 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129524 kB' 'MemAvailable: 9509292 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452756 kB' 'Inactive: 1265236 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132564 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6256 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.051 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.052 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.053 nr_hugepages=1024 00:03:47.053 resv_hugepages=0 00:03:47.053 surplus_hugepages=0 00:03:47.053 anon_hugepages=0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
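The summary values echoed in the trace above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) are the results of the get_meminfo lookups that precede them, after which hugepages.sh checks that the configured hugepage count is fully accounted for. A minimal standalone sketch of that accounting, assuming a plain /proc/meminfo lookup via awk rather than the script's own read loop (get_meminfo below is a hypothetical stand-in, not the real setup/common.sh function):

  #!/usr/bin/env bash
  # Sketch only: mirrors the accounting traced by setup/hugepages.sh@97-110 above.
  get_meminfo() {                               # hypothetical helper for this sketch
      awk -v key="$1" -F': +' '$1 == key {print $2+0}' /proc/meminfo
  }
  anon=$(get_meminfo AnonHugePages)             # 0 in the trace above
  surp=$(get_meminfo HugePages_Surp)            # 0
  resv=$(get_meminfo HugePages_Rsvd)            # 0
  nr_hugepages=$(get_meminfo HugePages_Total)   # 1024
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  # no_shrink_alloc expects the requested 1024 pages to remain fully visible,
  # with no surplus or reserved pages, as in the checks traced above:
  (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages ))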
00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129524 kB' 'MemAvailable: 9509292 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452532 kB' 'Inactive: 1265236 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132564 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6256 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
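Nearly all of the surrounding trace is setup/common.sh's get_meminfo scanning the meminfo snapshot printed just above: each 'key: value' line is read with IFS=': ', non-matching keys take the continue branch (the long runs of [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] checks), and the matching key's value is echoed. A condensed sketch of that scan, assuming a direct read of /proc/meminfo rather than the script's mapfile'd array:

  # Sketch only: the same field scan the continue/read lines above perform, for HugePages_Total.
  get="HugePages_Total"
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every other key is skipped, as in the trace
      echo "$val"                        # 1024 for this run
      break
  done < /proc/meminfo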
00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.053 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129776 kB' 'MemUsed: 4112204 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1265236 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1596560 kB' 'Mapped: 48632 kB' 'AnonPages: 123012 kB' 
'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132556 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.056 node0=1024 expecting 1024 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.056 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.580 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.580 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.580 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:47.580 19:41:41 
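The pass above ends with scripts/setup.sh reporting that 512 hugepages were requested (NRHUGE=512 with CLEAR_HUGE=no) while node0 already holds 1024, which is what the preceding "node0=1024 expecting 1024" check verified from /sys/devices/system/node/node0/meminfo. A minimal sketch of that per-node comparison is below, assuming the standard sysfs hugepage layout; the loop is an illustration, not the setup.sh implementation.

```bash
#!/usr/bin/env bash
# Sketch only: per-node hugepage check behind "node0=1024 expecting 1024"
# and the INFO message above. NRHUGE=512 mirrors the traced run.
# The hugepages-2048kB directory matches the Hugepagesize: 2048 kB
# reported in the meminfo dump earlier in this log.
NRHUGE=${NRHUGE:-512}

for node in /sys/devices/system/node/node[0-9]*; do
    nr_path=$node/hugepages/hugepages-2048kB/nr_hugepages
    [[ -e $nr_path ]] || continue
    allocated=$(<"$nr_path")
    echo "${node##*/}: $allocated hugepages currently allocated"
    if (( allocated >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on ${node##*/}"
    fi
done
```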
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8129904 kB' 'MemAvailable: 9509672 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 453904 kB' 'Inactive: 1265236 kB' 'Active(anon): 133044 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 124016 kB' 'Mapped: 48936 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132516 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71292 kB' 'KernelStack: 6372 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
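Around this point the trace gates the AnonHugePages lookup on the transparent-hugepage policy string ("always [madvise] never"): anonymous hugepages are counted only when THP is not pinned to [never], and on this run the lookup returns 0. A minimal sketch of that gate, assuming the standard /sys/kernel/mm/transparent_hugepage/enabled path; the reduced parse loop is an illustration of the traced get_meminfo pattern, not the SPDK helper itself.

```bash
#!/usr/bin/env bash
# Sketch only: transparent-hugepage gate before counting AnonHugePages.
# On this run the policy is "always [madvise] never", so the lookup runs
# and AnonHugePages comes back as 0, matching anon_hugepages=0 in the trace.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)

anon=0
if [[ $thp != *"[never]"* ]]; then
    # Same key/value parse as the traced loop, limited to one field.
    while IFS=': ' read -r var val _; do
        [[ $var == AnonHugePages ]] && { anon=$val; break; }
    done < /proc/meminfo
fi
echo "anon_hugepages=$anon"
```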
00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.580 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.581 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8130156 kB' 'MemAvailable: 9509924 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452956 kB' 'Inactive: 1265236 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132564 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71340 kB' 'KernelStack: 6320 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.582 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.582 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8130596 kB' 'MemAvailable: 9510364 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452944 kB' 'Inactive: 1265236 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123228 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132548 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6320 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.585 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.586 nr_hugepages=1024 00:03:47.586 resv_hugepages=0 00:03:47.586 surplus_hugepages=0 00:03:47.586 anon_hugepages=0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8130856 kB' 'MemAvailable: 9510624 kB' 'Buffers: 2436 kB' 'Cached: 1594124 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1265236 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61224 kB' 'Slab: 132548 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6252 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
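The long run of "[[ <field> == HugePages_Total ]]" / "continue" pairs above (and continuing below) is setup/common.sh's get_meminfo helper walking every line of /proc/meminfo, or of a per-node meminfo file, until it reaches the field it was asked for. A minimal sketch of that helper, reconstructed from the trace; the names mirror what the trace shows, and the real common.sh may differ in detail:

shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-NUMA-node statistics live in sysfs and carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix, if present

    # Every "[[ <field> == HugePages_Total ]]" / "continue" pair in the trace
    # is one pass through this loop.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo HugePages_Total      # -> 1024 on the VM in this log
get_meminfo HugePages_Surp 0     # -> 0 for node0

The hugepages test then compares the returned totals against its expected per-node allocation (here, 1024 pages on node0), which is what the "(( 1024 == nr_hugepages + surp + resv ))" check below is doing.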
00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.587 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8131636 kB' 'MemUsed: 4110344 kB' 'SwapCached: 0 kB' 'Active: 
452960 kB' 'Inactive: 1265236 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1596560 kB' 'Mapped: 48808 kB' 'AnonPages: 123040 kB' 'Shmem: 10464 kB' 'KernelStack: 6268 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61224 kB' 'Slab: 132548 kB' 'SReclaimable: 61224 kB' 'SUnreclaim: 71324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 
19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.588 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.589 node0=1024 expecting 1024 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.589 00:03:47.589 real 0m1.052s 00:03:47.589 user 0m0.525s 00:03:47.589 sys 0m0.533s 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.589 19:41:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.589 ************************************ 00:03:47.589 END TEST no_shrink_alloc 00:03:47.589 ************************************ 00:03:47.589 19:41:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.589 
19:41:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.589 19:41:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:47.589 00:03:47.589 real 0m4.547s 00:03:47.589 user 0m2.218s 00:03:47.589 sys 0m2.322s 00:03:47.589 ************************************ 00:03:47.589 END TEST hugepages 00:03:47.589 ************************************ 00:03:47.589 19:41:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.589 19:41:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.847 19:41:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:47.848 19:41:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.848 19:41:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.848 19:41:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.848 19:41:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.848 ************************************ 00:03:47.848 START TEST driver 00:03:47.848 ************************************ 00:03:47.848 19:41:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.848 * Looking for test storage... 00:03:47.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:47.848 19:41:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:47.848 19:41:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.848 19:41:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.415 19:41:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:48.415 19:41:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.415 19:41:42 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.415 19:41:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.415 ************************************ 00:03:48.415 START TEST guess_driver 00:03:48.415 ************************************ 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:48.415 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
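Stripped of the per-helper xtrace lines, the guess_driver logic traced above and below boils down to: prefer vfio-pci when the host exposes IOMMU groups (or has unsafe no-IOMMU mode enabled), otherwise fall back to uio_pci_generic if modprobe can resolve the module. A condensed sketch under those assumptions; the real driver.sh splits this across vfio(), uio(), is_driver() and mod() helpers, as the trace shows:

shopt -s nullglob   # so a missing iommu_groups directory expands to an empty array

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=""

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # vfio-pci needs either real IOMMU groups or unsafe no-IOMMU mode ...
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # ... otherwise fall back to uio_pci_generic, provided modprobe can resolve
    # the module. "--show-depends" prints the insmod lines seen in the trace;
    # checking for ".ko" in that output is the same test as the escaped
    # "*\.\k\o*" pattern match below.
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

On this VM the IOMMU group count is 0 and unsafe no-IOMMU mode is unset, so the trace falls through to the uio_pci_generic branch, which is the driver the test ends up looking for.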
00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:48.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:48.416 Looking for driver=uio_pci_generic 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.416 19:41:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.983 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.249 19:41:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.816 00:03:49.816 real 0m1.396s 00:03:49.816 user 0m0.521s 00:03:49.816 sys 0m0.861s 00:03:49.816 19:41:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:49.816 ************************************ 00:03:49.816 END TEST guess_driver 00:03:49.816 ************************************ 00:03:49.816 19:41:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.816 19:41:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:49.816 00:03:49.816 real 0m2.067s 00:03:49.816 user 0m0.758s 00:03:49.816 sys 0m1.343s 00:03:49.816 19:41:43 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.816 19:41:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.816 ************************************ 00:03:49.816 END TEST driver 00:03:49.816 ************************************ 00:03:49.816 19:41:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:49.816 19:41:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.816 19:41:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.816 19:41:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.816 19:41:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.816 ************************************ 00:03:49.816 START TEST devices 00:03:49.816 ************************************ 00:03:49.816 19:41:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.816 * Looking for test storage... 00:03:49.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.816 19:41:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:49.816 19:41:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:49.816 19:41:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.816 19:41:44 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.753 19:41:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:50.753 19:41:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
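Before touching any disks, the devices suite screens each NVMe namespace for zoned block support (the is_block_zoned checks above and just below): a device is zoned when /sys/block/<dev>/queue/zoned reports anything other than "none", and such devices are excluded from the generic mount tests. A sketch of that screen, reconstructed from the trace; the real autotest_common.sh helper records extra details (such as the owning PCI address) that are omitted here:

is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

get_zoned_devs() {
    local -gA zoned_devs=()
    local nvme dev

    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # "none" means a conventional (non-zoned) namespace; anything else is
        # recorded so later steps can skip the device.
        if is_block_zoned "$dev"; then
            zoned_devs["$dev"]=1
        fi
    done
}

Every namespace in this log reports "none", so nothing is excluded. Each candidate is then probed with spdk-gpt.py and blkid -s PTTYPE; the "No valid GPT data, bailing" messages below mean the disk carries no partition table and is safe to use as a test disk, after which its size is checked against min_disk_size.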
00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.754 19:41:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.754 No valid GPT data, bailing 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:50.754 
19:41:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:50.754 No valid GPT data, bailing 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:50.754 No valid GPT data, bailing 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:50.754 19:41:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:50.754 19:41:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:50.754 19:41:44 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:51.013 No valid GPT data, bailing 00:03:51.013 19:41:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.013 19:41:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.013 19:41:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:51.013 19:41:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:51.013 19:41:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:51.013 19:41:45 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:51.013 19:41:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.013 19:41:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.013 19:41:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.013 19:41:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.013 ************************************ 00:03:51.013 START TEST nvme_mount 00:03:51.013 ************************************ 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.013 19:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.951 Creating new GPT entries in memory. 00:03:51.951 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.951 other utilities. 00:03:51.951 19:41:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.951 19:41:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.951 19:41:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.951 19:41:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.951 19:41:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:52.888 Creating new GPT entries in memory. 00:03:52.888 The operation has completed successfully. 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57051 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.888 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.147 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.407 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.407 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.666 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.666 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.666 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.666 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.666 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:53.666 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:53.666 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.666 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.666 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.924 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.925 19:41:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.925 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.182 19:41:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.182 19:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.440 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.699 19:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.971 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.971 00:03:54.971 real 0m3.913s 00:03:54.971 user 0m0.655s 00:03:54.971 sys 0m1.007s 00:03:54.971 19:41:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.971 19:41:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.971 ************************************ 00:03:54.971 END TEST nvme_mount 00:03:54.971 ************************************ 00:03:54.971 19:41:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:54.971 19:41:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:54.971 19:41:49 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.971 19:41:49 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.971 19:41:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:54.971 ************************************ 00:03:54.971 START TEST dm_mount 00:03:54.971 ************************************ 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
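Before the dm_mount run continues below, note that the nvme_mount test which finished above boils down to a short format/mount/verify/unmount cycle. A condensed sketch of that cycle, with the device, size and mount point taken verbatim from the trace (the touch stands in for the test-file creation that xtrace shows only as ':'; this is an illustration, not the test script itself):

  mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mkfs.ext4 -qF /dev/nvme0n1 1024M
  mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  touch /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme    # marker file the verify step checks for
  umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  wipefs --all /dev/nvme0n1                                             # leaves the namespace clean for the next test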
00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:54.971 19:41:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:55.907 Creating new GPT entries in memory. 00:03:55.907 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:55.907 other utilities. 00:03:55.907 19:41:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:55.907 19:41:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.907 19:41:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:55.907 19:41:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:55.907 19:41:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:56.842 Creating new GPT entries in memory. 00:03:56.842 The operation has completed successfully. 00:03:56.842 19:41:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:56.842 19:41:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.842 19:41:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.842 19:41:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.842 19:41:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:58.217 The operation has completed successfully. 
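The partition_drive step traced above first zaps any existing partition data and then creates two partitions of 262144 sectors each (128 MiB, assuming the usual 512-byte sectors), holding an exclusive flock on the disk for each sgdisk call, presumably to serialize partition-table updates. The equivalent standalone commands, with the sector ranges copied from the trace (a sketch, not the helper itself):

  sgdisk /dev/nvme0n1 --zap-all                                  # destroy old GPT/MBR structures
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191     # creates nvme0n1p1, 262144 sectors
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335   # creates nvme0n1p2, 262144 sectors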
00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57479 00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.217 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.218 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.476 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.477 19:41:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.735 19:41:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:58.994 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:58.994 00:03:58.994 real 0m4.211s 00:03:58.994 user 0m0.455s 00:03:58.994 sys 0m0.714s 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.994 19:41:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:58.994 ************************************ 00:03:58.994 END TEST dm_mount 00:03:58.994 ************************************ 00:03:59.252 19:41:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:59.252 19:41:53 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.252 19:41:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.510 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.510 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.510 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.510 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.510 19:41:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:59.510 00:03:59.510 real 0m9.608s 00:03:59.510 user 0m1.715s 00:03:59.510 sys 0m2.312s 00:03:59.510 19:41:53 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.510 19:41:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.510 ************************************ 00:03:59.510 END TEST devices 00:03:59.510 ************************************ 00:03:59.510 19:41:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.510 00:03:59.510 real 0m21.026s 00:03:59.510 user 0m6.760s 00:03:59.510 sys 0m8.637s 00:03:59.510 19:41:53 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.510 19:41:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.510 ************************************ 00:03:59.510 END TEST setup.sh 00:03:59.510 ************************************ 00:03:59.510 19:41:53 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.510 19:41:53 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.077 Hugepages 00:04:00.077 node hugesize free / total 00:04:00.077 node0 1048576kB 0 / 0 00:04:00.077 node0 2048kB 2048 / 2048 00:04:00.077 00:04:00.077 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.334 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.334 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:00.334 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:00.334 19:41:54 -- spdk/autotest.sh@130 -- # uname -s 00:04:00.334 19:41:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:00.334 19:41:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:00.334 19:41:54 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.899 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.156 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.156 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.156 19:41:55 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:02.091 19:41:56 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:02.091 19:41:56 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:02.091 19:41:56 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.091 19:41:56 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:02.091 19:41:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:02.091 19:41:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:02.091 19:41:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.091 19:41:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.091 19:41:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:02.349 19:41:56 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:02.349 19:41:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.349 19:41:56 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.608 Waiting for block devices as requested 00:04:02.608 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.867 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.867 19:41:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.867 19:41:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.867 19:41:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.867 19:41:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.867 19:41:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1557 -- # continue 00:04:02.867 
19:41:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.867 19:41:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.867 19:41:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.867 19:41:56 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.867 19:41:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.867 19:41:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.867 19:41:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.867 19:41:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.867 19:41:56 -- common/autotest_common.sh@1557 -- # continue 00:04:02.867 19:41:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:02.867 19:41:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.867 19:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.867 19:41:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:02.867 19:41:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:02.867 19:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:02.867 19:41:57 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.700 19:41:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:03.700 19:41:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.700 19:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:03.700 19:41:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:03.700 19:41:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:03.700 19:41:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.700 19:41:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:03.700 19:41:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:03.700 19:41:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:03.700 19:41:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:03.700 19:41:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:03.700 19:41:57 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.700 19:41:57 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.700 19:41:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:03.974 19:41:57 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:03.974 19:41:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.974 19:41:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.974 19:41:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.974 19:41:57 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.974 19:41:57 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.974 19:41:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.974 19:41:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.974 19:41:57 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.974 19:41:57 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.974 19:41:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:03.974 19:41:57 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:03.974 19:41:57 -- common/autotest_common.sh@1593 -- # return 0 00:04:03.974 19:41:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:03.974 19:41:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:03.974 19:41:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.974 19:41:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.974 19:41:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:03.974 19:41:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.974 19:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 19:41:57 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:03.974 19:41:57 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.974 19:41:57 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.974 19:41:57 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.975 19:41:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.975 19:41:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.975 19:41:57 -- common/autotest_common.sh@10 -- # set +x 00:04:03.975 ************************************ 00:04:03.975 START TEST env 00:04:03.975 ************************************ 00:04:03.975 19:41:58 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.975 * Looking for test storage... 
00:04:03.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.975 19:41:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.975 19:41:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.975 19:41:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.975 19:41:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.975 ************************************ 00:04:03.975 START TEST env_memory 00:04:03.975 ************************************ 00:04:03.975 19:41:58 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.975 00:04:03.975 00:04:03.975 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.975 http://cunit.sourceforge.net/ 00:04:03.975 00:04:03.975 00:04:03.975 Suite: memory 00:04:03.975 Test: alloc and free memory map ...[2024-07-15 19:41:58.143817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.975 passed 00:04:03.975 Test: mem map translation ...[2024-07-15 19:41:58.175061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.975 [2024-07-15 19:41:58.175109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.975 [2024-07-15 19:41:58.175168] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.975 [2024-07-15 19:41:58.175179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.233 passed 00:04:04.234 Test: mem map registration ...[2024-07-15 19:41:58.239159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.234 [2024-07-15 19:41:58.239197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:04.234 passed 00:04:04.234 Test: mem map adjacent registrations ...passed 00:04:04.234 00:04:04.234 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.234 suites 1 1 n/a 0 0 00:04:04.234 tests 4 4 4 0 0 00:04:04.234 asserts 152 152 152 0 n/a 00:04:04.234 00:04:04.234 Elapsed time = 0.214 seconds 00:04:04.234 00:04:04.234 real 0m0.228s 00:04:04.234 user 0m0.212s 00:04:04.234 sys 0m0.014s 00:04:04.234 19:41:58 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.234 19:41:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.234 ************************************ 00:04:04.234 END TEST env_memory 00:04:04.234 ************************************ 00:04:04.234 19:41:58 env -- common/autotest_common.sh@1142 -- # return 0 00:04:04.234 19:41:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.234 19:41:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.234 19:41:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.234 19:41:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.234 ************************************ 00:04:04.234 START TEST env_vtophys 
00:04:04.234 ************************************ 00:04:04.234 19:41:58 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.234 EAL: lib.eal log level changed from notice to debug 00:04:04.234 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.234 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.234 EAL: Maximum logical cores by configuration: 128 00:04:04.234 EAL: Detected CPU lcores: 10 00:04:04.234 EAL: Detected NUMA nodes: 1 00:04:04.234 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.234 EAL: Detected shared linkage of DPDK 00:04:04.234 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.234 EAL: Selected IOVA mode 'PA' 00:04:04.234 EAL: Probing VFIO support... 00:04:04.234 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.234 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.234 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.234 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.234 EAL: Setting up physically contiguous memory... 00:04:04.234 EAL: Setting maximum number of open files to 524288 00:04:04.234 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.234 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.234 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.234 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.234 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.234 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.234 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.234 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.234 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.234 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.234 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.234 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.234 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.234 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.234 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.234 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.234 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.234 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.234 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.234 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.234 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.234 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.234 EAL: Hugepages will be freed exactly as allocated. 
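The reservation sizes above are easy to sanity-check: each of the four memseg lists covers n_segs:8192 segments of 2 MiB hugepages, which is exactly the 0x400000000 bytes of virtual address space requested per list. A quick check of that arithmetic (purely illustrative):

  echo $(( 8192 * 2097152 ))       # 17179869184 bytes = 0x400000000, matching each 'size = 0x400000000' request
  echo $(( 4 * 8192 * 2097152 ))   # 68719476736 bytes = 64 GiB of VA reserved across the four lists on socket 0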
00:04:04.234 EAL: No shared files mode enabled, IPC is disabled 00:04:04.234 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: TSC frequency is ~2200000 KHz 00:04:04.493 EAL: Main lcore 0 is ready (tid=7ff05f484a00;cpuset=[0]) 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 0 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.493 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.493 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.493 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.493 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.493 00:04:04.493 00:04:04.493 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.493 http://cunit.sourceforge.net/ 00:04:04.493 00:04:04.493 00:04:04.493 Suite: components_suite 00:04:04.493 Test: vtophys_malloc_test ...passed 00:04:04.493 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.493 EAL: Trying to obtain current memory policy. 
00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.493 EAL: Restoring previous memory policy: 4 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.493 EAL: Trying to obtain current memory policy. 00:04:04.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.752 EAL: Restoring previous memory policy: 4 00:04:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.752 EAL: request: mp_malloc_sync 00:04:04.752 EAL: No shared files mode enabled, IPC is disabled 00:04:04.752 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.752 EAL: request: mp_malloc_sync 00:04:04.752 EAL: No shared files mode enabled, IPC is disabled 00:04:04.752 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.752 EAL: Trying to obtain current memory policy. 
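The heap expansions in this suite grow geometrically: the sizes logged so far (4, 6, 10, 18, 34, 66 and 130 MB) are each 2^n + 2 MB, and the steps that follow below (258, 514 and 1026 MB) continue the same progression. Reproducing the sequence (illustrative only; it simply regenerates the numbers already visible in the trace):

  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB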
00:04:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.752 EAL: Restoring previous memory policy: 4 00:04:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.752 EAL: request: mp_malloc_sync 00:04:04.752 EAL: No shared files mode enabled, IPC is disabled 00:04:04.752 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.011 EAL: request: mp_malloc_sync 00:04:05.011 EAL: No shared files mode enabled, IPC is disabled 00:04:05.011 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.011 EAL: Trying to obtain current memory policy. 00:04:05.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.269 EAL: Restoring previous memory policy: 4 00:04:05.269 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.269 EAL: request: mp_malloc_sync 00:04:05.269 EAL: No shared files mode enabled, IPC is disabled 00:04:05.269 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.785 passed 00:04:05.785 00:04:05.785 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.785 suites 1 1 n/a 0 0 00:04:05.785 tests 2 2 2 0 0 00:04:05.785 asserts 5218 5218 5218 0 n/a 00:04:05.785 00:04:05.785 Elapsed time = 1.257 seconds 00:04:05.785 EAL: request: mp_malloc_sync 00:04:05.785 EAL: No shared files mode enabled, IPC is disabled 00:04:05.785 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.785 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.785 EAL: request: mp_malloc_sync 00:04:05.785 EAL: No shared files mode enabled, IPC is disabled 00:04:05.785 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.785 EAL: No shared files mode enabled, IPC is disabled 00:04:05.785 EAL: No shared files mode enabled, IPC is disabled 00:04:05.785 EAL: No shared files mode enabled, IPC is disabled 00:04:05.785 00:04:05.785 real 0m1.459s 00:04:05.785 user 0m0.806s 00:04:05.785 sys 0m0.515s 00:04:05.785 19:41:59 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.785 19:41:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.785 ************************************ 00:04:05.785 END TEST env_vtophys 00:04:05.785 ************************************ 00:04:05.785 19:41:59 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.785 19:41:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.785 19:41:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.785 19:41:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.785 19:41:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.785 ************************************ 00:04:05.785 START TEST env_pci 00:04:05.785 ************************************ 00:04:05.785 19:41:59 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.785 00:04:05.785 00:04:05.785 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.785 http://cunit.sourceforge.net/ 00:04:05.785 00:04:05.785 00:04:05.785 Suite: pci 00:04:05.785 Test: pci_hook ...[2024-07-15 19:41:59.899490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58672 has claimed it 00:04:05.785 passed 00:04:05.785 00:04:05.785 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.785 suites 1 1 n/a 0 0 00:04:05.785 tests 1 1 1 0 0 00:04:05.785 asserts 25 25 25 0 n/a 00:04:05.785 
00:04:05.785 Elapsed time = 0.002 seconds 00:04:05.785 EAL: Cannot find device (10000:00:01.0) 00:04:05.785 EAL: Failed to attach device on primary process 00:04:05.785 00:04:05.785 real 0m0.019s 00:04:05.785 user 0m0.003s 00:04:05.785 sys 0m0.015s 00:04:05.785 19:41:59 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.785 19:41:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.785 ************************************ 00:04:05.785 END TEST env_pci 00:04:05.785 ************************************ 00:04:05.785 19:41:59 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.785 19:41:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.785 19:41:59 env -- env/env.sh@15 -- # uname 00:04:05.786 19:41:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.786 19:41:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.786 19:41:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.786 19:41:59 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:05.786 19:41:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.786 19:41:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.786 ************************************ 00:04:05.786 START TEST env_dpdk_post_init 00:04:05.786 ************************************ 00:04:05.786 19:41:59 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.786 EAL: Detected CPU lcores: 10 00:04:05.786 EAL: Detected NUMA nodes: 1 00:04:05.786 EAL: Detected shared linkage of DPDK 00:04:05.786 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.786 EAL: Selected IOVA mode 'PA' 00:04:06.044 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.044 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:06.044 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:06.044 Starting DPDK initialization... 00:04:06.044 Starting SPDK post initialization... 00:04:06.044 SPDK NVMe probe 00:04:06.044 Attaching to 0000:00:10.0 00:04:06.044 Attaching to 0000:00:11.0 00:04:06.044 Attached to 0000:00:10.0 00:04:06.044 Attached to 0000:00:11.0 00:04:06.044 Cleaning up... 
00:04:06.044 00:04:06.044 real 0m0.185s 00:04:06.044 user 0m0.046s 00:04:06.044 sys 0m0.039s 00:04:06.044 19:42:00 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.044 19:42:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.044 ************************************ 00:04:06.044 END TEST env_dpdk_post_init 00:04:06.044 ************************************ 00:04:06.044 19:42:00 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.044 19:42:00 env -- env/env.sh@26 -- # uname 00:04:06.044 19:42:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.044 19:42:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.044 19:42:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.044 19:42:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.044 19:42:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.044 ************************************ 00:04:06.044 START TEST env_mem_callbacks 00:04:06.044 ************************************ 00:04:06.044 19:42:00 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.044 EAL: Detected CPU lcores: 10 00:04:06.044 EAL: Detected NUMA nodes: 1 00:04:06.044 EAL: Detected shared linkage of DPDK 00:04:06.044 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.044 EAL: Selected IOVA mode 'PA' 00:04:06.302 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.302 00:04:06.302 00:04:06.302 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.302 http://cunit.sourceforge.net/ 00:04:06.302 00:04:06.302 00:04:06.302 Suite: memory 00:04:06.302 Test: test ... 
00:04:06.302 register 0x200000200000 2097152 00:04:06.302 malloc 3145728 00:04:06.302 register 0x200000400000 4194304 00:04:06.302 buf 0x200000500000 len 3145728 PASSED 00:04:06.302 malloc 64 00:04:06.302 buf 0x2000004fff40 len 64 PASSED 00:04:06.302 malloc 4194304 00:04:06.302 register 0x200000800000 6291456 00:04:06.302 buf 0x200000a00000 len 4194304 PASSED 00:04:06.302 free 0x200000500000 3145728 00:04:06.302 free 0x2000004fff40 64 00:04:06.302 unregister 0x200000400000 4194304 PASSED 00:04:06.302 free 0x200000a00000 4194304 00:04:06.303 unregister 0x200000800000 6291456 PASSED 00:04:06.303 malloc 8388608 00:04:06.303 register 0x200000400000 10485760 00:04:06.303 buf 0x200000600000 len 8388608 PASSED 00:04:06.303 free 0x200000600000 8388608 00:04:06.303 unregister 0x200000400000 10485760 PASSED 00:04:06.303 passed 00:04:06.303 00:04:06.303 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.303 suites 1 1 n/a 0 0 00:04:06.303 tests 1 1 1 0 0 00:04:06.303 asserts 15 15 15 0 n/a 00:04:06.303 00:04:06.303 Elapsed time = 0.007 seconds 00:04:06.303 00:04:06.303 real 0m0.143s 00:04:06.303 user 0m0.021s 00:04:06.303 sys 0m0.021s 00:04:06.303 19:42:00 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.303 ************************************ 00:04:06.303 END TEST env_mem_callbacks 00:04:06.303 ************************************ 00:04:06.303 19:42:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.303 19:42:00 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.303 00:04:06.303 real 0m2.378s 00:04:06.303 user 0m1.210s 00:04:06.303 sys 0m0.807s 00:04:06.303 19:42:00 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.303 19:42:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.303 ************************************ 00:04:06.303 END TEST env 00:04:06.303 ************************************ 00:04:06.303 19:42:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.303 19:42:00 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.303 19:42:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.303 19:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.303 19:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:06.303 ************************************ 00:04:06.303 START TEST rpc 00:04:06.303 ************************************ 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.303 * Looking for test storage... 00:04:06.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.303 19:42:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58781 00:04:06.303 19:42:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.303 19:42:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:06.303 19:42:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58781 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@829 -- # '[' -z 58781 ']' 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:06.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
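At this point run_test has launched spdk_tgt -e bdev and waitforlisten is polling the RPC socket. An equivalent manual liveness check is sketched below, assuming the default socket path /var/tmp/spdk.sock:
  ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version    # succeeds only once the RPC server is listening
Until the listener exists the call simply fails, which is effectively the condition the retry loop waits on.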
00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:06.303 19:42:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.562 [2024-07-15 19:42:00.577105] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:06.562 [2024-07-15 19:42:00.577201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58781 ] 00:04:06.562 [2024-07-15 19:42:00.717375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.821 [2024-07-15 19:42:00.841720] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.821 [2024-07-15 19:42:00.841791] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58781' to capture a snapshot of events at runtime. 00:04:06.821 [2024-07-15 19:42:00.841806] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.821 [2024-07-15 19:42:00.841817] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.821 [2024-07-15 19:42:00.841826] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58781 for offline analysis/debug. 00:04:06.821 [2024-07-15 19:42:00.841871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.821 [2024-07-15 19:42:00.898674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:07.388 19:42:01 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.388 19:42:01 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:07.388 19:42:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.388 19:42:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.388 19:42:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.388 19:42:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.388 19:42:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.388 19:42:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.388 19:42:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.388 ************************************ 00:04:07.388 START TEST rpc_integrity 00:04:07.388 ************************************ 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.388 { 00:04:07.388 "name": "Malloc0", 00:04:07.388 "aliases": [ 00:04:07.388 "8255cace-80f4-4c45-a304-d90ee1f78e5b" 00:04:07.388 ], 00:04:07.388 "product_name": "Malloc disk", 00:04:07.388 "block_size": 512, 00:04:07.388 "num_blocks": 16384, 00:04:07.388 "uuid": "8255cace-80f4-4c45-a304-d90ee1f78e5b", 00:04:07.388 "assigned_rate_limits": { 00:04:07.388 "rw_ios_per_sec": 0, 00:04:07.388 "rw_mbytes_per_sec": 0, 00:04:07.388 "r_mbytes_per_sec": 0, 00:04:07.388 "w_mbytes_per_sec": 0 00:04:07.388 }, 00:04:07.388 "claimed": false, 00:04:07.388 "zoned": false, 00:04:07.388 "supported_io_types": { 00:04:07.388 "read": true, 00:04:07.388 "write": true, 00:04:07.388 "unmap": true, 00:04:07.388 "flush": true, 00:04:07.388 "reset": true, 00:04:07.388 "nvme_admin": false, 00:04:07.388 "nvme_io": false, 00:04:07.388 "nvme_io_md": false, 00:04:07.388 "write_zeroes": true, 00:04:07.388 "zcopy": true, 00:04:07.388 "get_zone_info": false, 00:04:07.388 "zone_management": false, 00:04:07.388 "zone_append": false, 00:04:07.388 "compare": false, 00:04:07.388 "compare_and_write": false, 00:04:07.388 "abort": true, 00:04:07.388 "seek_hole": false, 00:04:07.388 "seek_data": false, 00:04:07.388 "copy": true, 00:04:07.388 "nvme_iov_md": false 00:04:07.388 }, 00:04:07.388 "memory_domains": [ 00:04:07.388 { 00:04:07.388 "dma_device_id": "system", 00:04:07.388 "dma_device_type": 1 00:04:07.388 }, 00:04:07.388 { 00:04:07.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.388 "dma_device_type": 2 00:04:07.388 } 00:04:07.388 ], 00:04:07.388 "driver_specific": {} 00:04:07.388 } 00:04:07.388 ]' 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.388 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.388 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.388 [2024-07-15 19:42:01.620074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.388 [2024-07-15 19:42:01.620121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.388 [2024-07-15 19:42:01.620141] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13654d0 00:04:07.388 [2024-07-15 19:42:01.620150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.388 [2024-07-15 19:42:01.621705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.388 [2024-07-15 19:42:01.621742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:07.389 Passthru0 00:04:07.389 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.389 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.389 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.389 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.647 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.647 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.647 { 00:04:07.647 "name": "Malloc0", 00:04:07.647 "aliases": [ 00:04:07.647 "8255cace-80f4-4c45-a304-d90ee1f78e5b" 00:04:07.647 ], 00:04:07.647 "product_name": "Malloc disk", 00:04:07.647 "block_size": 512, 00:04:07.647 "num_blocks": 16384, 00:04:07.647 "uuid": "8255cace-80f4-4c45-a304-d90ee1f78e5b", 00:04:07.647 "assigned_rate_limits": { 00:04:07.647 "rw_ios_per_sec": 0, 00:04:07.647 "rw_mbytes_per_sec": 0, 00:04:07.647 "r_mbytes_per_sec": 0, 00:04:07.647 "w_mbytes_per_sec": 0 00:04:07.647 }, 00:04:07.647 "claimed": true, 00:04:07.647 "claim_type": "exclusive_write", 00:04:07.647 "zoned": false, 00:04:07.647 "supported_io_types": { 00:04:07.647 "read": true, 00:04:07.647 "write": true, 00:04:07.647 "unmap": true, 00:04:07.647 "flush": true, 00:04:07.647 "reset": true, 00:04:07.647 "nvme_admin": false, 00:04:07.647 "nvme_io": false, 00:04:07.647 "nvme_io_md": false, 00:04:07.647 "write_zeroes": true, 00:04:07.647 "zcopy": true, 00:04:07.647 "get_zone_info": false, 00:04:07.647 "zone_management": false, 00:04:07.647 "zone_append": false, 00:04:07.647 "compare": false, 00:04:07.647 "compare_and_write": false, 00:04:07.647 "abort": true, 00:04:07.647 "seek_hole": false, 00:04:07.647 "seek_data": false, 00:04:07.647 "copy": true, 00:04:07.647 "nvme_iov_md": false 00:04:07.647 }, 00:04:07.647 "memory_domains": [ 00:04:07.647 { 00:04:07.647 "dma_device_id": "system", 00:04:07.647 "dma_device_type": 1 00:04:07.647 }, 00:04:07.647 { 00:04:07.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.647 "dma_device_type": 2 00:04:07.647 } 00:04:07.647 ], 00:04:07.647 "driver_specific": {} 00:04:07.647 }, 00:04:07.647 { 00:04:07.647 "name": "Passthru0", 00:04:07.647 "aliases": [ 00:04:07.647 "bbcdd592-5d9e-5819-b868-b6f0f519631c" 00:04:07.647 ], 00:04:07.647 "product_name": "passthru", 00:04:07.647 "block_size": 512, 00:04:07.647 "num_blocks": 16384, 00:04:07.647 "uuid": "bbcdd592-5d9e-5819-b868-b6f0f519631c", 00:04:07.647 "assigned_rate_limits": { 00:04:07.647 "rw_ios_per_sec": 0, 00:04:07.647 "rw_mbytes_per_sec": 0, 00:04:07.647 "r_mbytes_per_sec": 0, 00:04:07.647 "w_mbytes_per_sec": 0 00:04:07.647 }, 00:04:07.647 "claimed": false, 00:04:07.647 "zoned": false, 00:04:07.647 "supported_io_types": { 00:04:07.647 "read": true, 00:04:07.647 "write": true, 00:04:07.647 "unmap": true, 00:04:07.647 "flush": true, 00:04:07.647 "reset": true, 00:04:07.647 "nvme_admin": false, 00:04:07.647 "nvme_io": false, 00:04:07.647 "nvme_io_md": false, 00:04:07.647 "write_zeroes": true, 00:04:07.647 "zcopy": true, 00:04:07.647 "get_zone_info": false, 00:04:07.647 "zone_management": false, 00:04:07.647 "zone_append": false, 00:04:07.647 "compare": false, 00:04:07.647 "compare_and_write": false, 00:04:07.647 "abort": true, 00:04:07.647 "seek_hole": false, 00:04:07.647 "seek_data": false, 00:04:07.647 "copy": true, 00:04:07.647 "nvme_iov_md": false 00:04:07.647 }, 00:04:07.647 "memory_domains": [ 00:04:07.647 { 00:04:07.647 "dma_device_id": "system", 00:04:07.647 
"dma_device_type": 1 00:04:07.648 }, 00:04:07.648 { 00:04:07.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.648 "dma_device_type": 2 00:04:07.648 } 00:04:07.648 ], 00:04:07.648 "driver_specific": { 00:04:07.648 "passthru": { 00:04:07.648 "name": "Passthru0", 00:04:07.648 "base_bdev_name": "Malloc0" 00:04:07.648 } 00:04:07.648 } 00:04:07.648 } 00:04:07.648 ]' 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.648 19:42:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.648 00:04:07.648 real 0m0.307s 00:04:07.648 user 0m0.197s 00:04:07.648 sys 0m0.040s 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 ************************************ 00:04:07.648 END TEST rpc_integrity 00:04:07.648 ************************************ 00:04:07.648 19:42:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.648 19:42:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.648 19:42:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.648 19:42:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.648 19:42:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 ************************************ 00:04:07.648 START TEST rpc_plugins 00:04:07.648 ************************************ 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:07.648 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.648 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.648 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.648 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.648 
19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.648 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.648 { 00:04:07.648 "name": "Malloc1", 00:04:07.648 "aliases": [ 00:04:07.648 "14296a46-aabc-42f6-baad-f0fbad57f569" 00:04:07.648 ], 00:04:07.648 "product_name": "Malloc disk", 00:04:07.648 "block_size": 4096, 00:04:07.648 "num_blocks": 256, 00:04:07.648 "uuid": "14296a46-aabc-42f6-baad-f0fbad57f569", 00:04:07.648 "assigned_rate_limits": { 00:04:07.648 "rw_ios_per_sec": 0, 00:04:07.648 "rw_mbytes_per_sec": 0, 00:04:07.648 "r_mbytes_per_sec": 0, 00:04:07.648 "w_mbytes_per_sec": 0 00:04:07.648 }, 00:04:07.648 "claimed": false, 00:04:07.648 "zoned": false, 00:04:07.648 "supported_io_types": { 00:04:07.648 "read": true, 00:04:07.648 "write": true, 00:04:07.648 "unmap": true, 00:04:07.648 "flush": true, 00:04:07.648 "reset": true, 00:04:07.648 "nvme_admin": false, 00:04:07.648 "nvme_io": false, 00:04:07.648 "nvme_io_md": false, 00:04:07.648 "write_zeroes": true, 00:04:07.648 "zcopy": true, 00:04:07.648 "get_zone_info": false, 00:04:07.648 "zone_management": false, 00:04:07.648 "zone_append": false, 00:04:07.648 "compare": false, 00:04:07.648 "compare_and_write": false, 00:04:07.648 "abort": true, 00:04:07.648 "seek_hole": false, 00:04:07.648 "seek_data": false, 00:04:07.648 "copy": true, 00:04:07.648 "nvme_iov_md": false 00:04:07.648 }, 00:04:07.648 "memory_domains": [ 00:04:07.648 { 00:04:07.648 "dma_device_id": "system", 00:04:07.648 "dma_device_type": 1 00:04:07.648 }, 00:04:07.648 { 00:04:07.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.648 "dma_device_type": 2 00:04:07.648 } 00:04:07.648 ], 00:04:07.648 "driver_specific": {} 00:04:07.648 } 00:04:07.648 ]' 00:04:07.648 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.907 19:42:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.907 00:04:07.907 real 0m0.161s 00:04:07.907 user 0m0.105s 00:04:07.907 sys 0m0.019s 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.907 ************************************ 00:04:07.907 END TEST rpc_plugins 00:04:07.907 19:42:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.907 ************************************ 00:04:07.907 19:42:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.907 19:42:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.907 19:42:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.907 19:42:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
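The rpc_integrity and rpc_plugins cases above reduce to a handful of RPCs; issued by hand against the same target they would look roughly like this (sketch, default socket assumed):
  ./scripts/rpc.py bdev_malloc_create 8 512                  # 8 MB malloc bdev with 512-byte blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                # 2 while both bdevs exist, 0 after cleanup
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
The plugin variant additionally exports test/rpc_plugins on PYTHONPATH, as shown earlier, and drives the create_malloc/delete_malloc methods that plugin registers.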
00:04:07.907 19:42:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.907 ************************************ 00:04:07.907 START TEST rpc_trace_cmd_test 00:04:07.907 ************************************ 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.907 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58781", 00:04:07.907 "tpoint_group_mask": "0x8", 00:04:07.907 "iscsi_conn": { 00:04:07.907 "mask": "0x2", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "scsi": { 00:04:07.907 "mask": "0x4", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "bdev": { 00:04:07.907 "mask": "0x8", 00:04:07.907 "tpoint_mask": "0xffffffffffffffff" 00:04:07.907 }, 00:04:07.907 "nvmf_rdma": { 00:04:07.907 "mask": "0x10", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "nvmf_tcp": { 00:04:07.907 "mask": "0x20", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "ftl": { 00:04:07.907 "mask": "0x40", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "blobfs": { 00:04:07.907 "mask": "0x80", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "dsa": { 00:04:07.907 "mask": "0x200", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "thread": { 00:04:07.907 "mask": "0x400", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "nvme_pcie": { 00:04:07.907 "mask": "0x800", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "iaa": { 00:04:07.907 "mask": "0x1000", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "nvme_tcp": { 00:04:07.907 "mask": "0x2000", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "bdev_nvme": { 00:04:07.907 "mask": "0x4000", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 }, 00:04:07.907 "sock": { 00:04:07.907 "mask": "0x8000", 00:04:07.907 "tpoint_mask": "0x0" 00:04:07.907 } 00:04:07.907 }' 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:07.907 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.165 00:04:08.165 real 0m0.266s 00:04:08.165 user 0m0.231s 00:04:08.165 sys 0m0.027s 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.165 
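trace_get_info above confirms the bdev tracepoint group (mask 0x8, tpoint_mask 0xffffffffffffffff) is enabled and reports the shared-memory file backing the trace. The snapshot can be inspected with the command the target itself suggested, sketched here with this run's pid:
  ./build/bin/spdk_trace -s spdk_tgt -p 58781
  cp /dev/shm/spdk_tgt_trace.pid58781 /tmp/                  # keep the shm file for offline analysis, as the notice suggests
Both the invocation and the offline-copy option come from the app_setup_trace notices printed when the target started.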
************************************ 00:04:08.165 END TEST rpc_trace_cmd_test 00:04:08.165 ************************************ 00:04:08.165 19:42:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.165 19:42:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:08.165 19:42:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.165 19:42:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.165 19:42:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.165 19:42:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.165 19:42:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.165 19:42:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.165 ************************************ 00:04:08.165 START TEST rpc_daemon_integrity 00:04:08.165 ************************************ 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.165 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.492 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.492 { 00:04:08.492 "name": "Malloc2", 00:04:08.492 "aliases": [ 00:04:08.492 "1e5ee388-a1ec-41bf-8e8c-42d9a07a3090" 00:04:08.492 ], 00:04:08.492 "product_name": "Malloc disk", 00:04:08.492 "block_size": 512, 00:04:08.492 "num_blocks": 16384, 00:04:08.492 "uuid": "1e5ee388-a1ec-41bf-8e8c-42d9a07a3090", 00:04:08.492 "assigned_rate_limits": { 00:04:08.492 "rw_ios_per_sec": 0, 00:04:08.492 "rw_mbytes_per_sec": 0, 00:04:08.492 "r_mbytes_per_sec": 0, 00:04:08.492 "w_mbytes_per_sec": 0 00:04:08.492 }, 00:04:08.492 "claimed": false, 00:04:08.492 "zoned": false, 00:04:08.492 "supported_io_types": { 00:04:08.492 "read": true, 00:04:08.492 "write": true, 00:04:08.492 "unmap": true, 00:04:08.492 "flush": true, 00:04:08.492 "reset": true, 00:04:08.492 "nvme_admin": false, 00:04:08.492 "nvme_io": false, 00:04:08.492 "nvme_io_md": false, 00:04:08.492 "write_zeroes": true, 00:04:08.492 "zcopy": true, 00:04:08.492 "get_zone_info": false, 00:04:08.492 "zone_management": false, 00:04:08.492 "zone_append": 
false, 00:04:08.492 "compare": false, 00:04:08.492 "compare_and_write": false, 00:04:08.492 "abort": true, 00:04:08.492 "seek_hole": false, 00:04:08.492 "seek_data": false, 00:04:08.492 "copy": true, 00:04:08.492 "nvme_iov_md": false 00:04:08.492 }, 00:04:08.492 "memory_domains": [ 00:04:08.492 { 00:04:08.492 "dma_device_id": "system", 00:04:08.493 "dma_device_type": 1 00:04:08.493 }, 00:04:08.493 { 00:04:08.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.493 "dma_device_type": 2 00:04:08.493 } 00:04:08.493 ], 00:04:08.493 "driver_specific": {} 00:04:08.493 } 00:04:08.493 ]' 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 [2024-07-15 19:42:02.514114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.493 [2024-07-15 19:42:02.514227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.493 [2024-07-15 19:42:02.514302] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x141d3a0 00:04:08.493 [2024-07-15 19:42:02.514328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.493 [2024-07-15 19:42:02.517444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.493 [2024-07-15 19:42:02.517543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.493 Passthru0 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.493 { 00:04:08.493 "name": "Malloc2", 00:04:08.493 "aliases": [ 00:04:08.493 "1e5ee388-a1ec-41bf-8e8c-42d9a07a3090" 00:04:08.493 ], 00:04:08.493 "product_name": "Malloc disk", 00:04:08.493 "block_size": 512, 00:04:08.493 "num_blocks": 16384, 00:04:08.493 "uuid": "1e5ee388-a1ec-41bf-8e8c-42d9a07a3090", 00:04:08.493 "assigned_rate_limits": { 00:04:08.493 "rw_ios_per_sec": 0, 00:04:08.493 "rw_mbytes_per_sec": 0, 00:04:08.493 "r_mbytes_per_sec": 0, 00:04:08.493 "w_mbytes_per_sec": 0 00:04:08.493 }, 00:04:08.493 "claimed": true, 00:04:08.493 "claim_type": "exclusive_write", 00:04:08.493 "zoned": false, 00:04:08.493 "supported_io_types": { 00:04:08.493 "read": true, 00:04:08.493 "write": true, 00:04:08.493 "unmap": true, 00:04:08.493 "flush": true, 00:04:08.493 "reset": true, 00:04:08.493 "nvme_admin": false, 00:04:08.493 "nvme_io": false, 00:04:08.493 "nvme_io_md": false, 00:04:08.493 "write_zeroes": true, 00:04:08.493 "zcopy": true, 00:04:08.493 "get_zone_info": false, 00:04:08.493 "zone_management": false, 00:04:08.493 "zone_append": false, 00:04:08.493 "compare": false, 00:04:08.493 "compare_and_write": false, 00:04:08.493 "abort": true, 00:04:08.493 
"seek_hole": false, 00:04:08.493 "seek_data": false, 00:04:08.493 "copy": true, 00:04:08.493 "nvme_iov_md": false 00:04:08.493 }, 00:04:08.493 "memory_domains": [ 00:04:08.493 { 00:04:08.493 "dma_device_id": "system", 00:04:08.493 "dma_device_type": 1 00:04:08.493 }, 00:04:08.493 { 00:04:08.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.493 "dma_device_type": 2 00:04:08.493 } 00:04:08.493 ], 00:04:08.493 "driver_specific": {} 00:04:08.493 }, 00:04:08.493 { 00:04:08.493 "name": "Passthru0", 00:04:08.493 "aliases": [ 00:04:08.493 "85a11f63-eced-529e-9404-c3c9224ced77" 00:04:08.493 ], 00:04:08.493 "product_name": "passthru", 00:04:08.493 "block_size": 512, 00:04:08.493 "num_blocks": 16384, 00:04:08.493 "uuid": "85a11f63-eced-529e-9404-c3c9224ced77", 00:04:08.493 "assigned_rate_limits": { 00:04:08.493 "rw_ios_per_sec": 0, 00:04:08.493 "rw_mbytes_per_sec": 0, 00:04:08.493 "r_mbytes_per_sec": 0, 00:04:08.493 "w_mbytes_per_sec": 0 00:04:08.493 }, 00:04:08.493 "claimed": false, 00:04:08.493 "zoned": false, 00:04:08.493 "supported_io_types": { 00:04:08.493 "read": true, 00:04:08.493 "write": true, 00:04:08.493 "unmap": true, 00:04:08.493 "flush": true, 00:04:08.493 "reset": true, 00:04:08.493 "nvme_admin": false, 00:04:08.493 "nvme_io": false, 00:04:08.493 "nvme_io_md": false, 00:04:08.493 "write_zeroes": true, 00:04:08.493 "zcopy": true, 00:04:08.493 "get_zone_info": false, 00:04:08.493 "zone_management": false, 00:04:08.493 "zone_append": false, 00:04:08.493 "compare": false, 00:04:08.493 "compare_and_write": false, 00:04:08.493 "abort": true, 00:04:08.493 "seek_hole": false, 00:04:08.493 "seek_data": false, 00:04:08.493 "copy": true, 00:04:08.493 "nvme_iov_md": false 00:04:08.493 }, 00:04:08.493 "memory_domains": [ 00:04:08.493 { 00:04:08.493 "dma_device_id": "system", 00:04:08.493 "dma_device_type": 1 00:04:08.493 }, 00:04:08.493 { 00:04:08.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.493 "dma_device_type": 2 00:04:08.493 } 00:04:08.493 ], 00:04:08.493 "driver_specific": { 00:04:08.493 "passthru": { 00:04:08.493 "name": "Passthru0", 00:04:08.493 "base_bdev_name": "Malloc2" 00:04:08.493 } 00:04:08.493 } 00:04:08.493 } 00:04:08.493 ]' 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.493 19:42:02 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.493 00:04:08.493 real 0m0.312s 00:04:08.493 user 0m0.210s 00:04:08.493 sys 0m0.039s 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.493 ************************************ 00:04:08.493 END TEST rpc_daemon_integrity 00:04:08.493 19:42:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.493 ************************************ 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:08.751 19:42:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.751 19:42:02 rpc -- rpc/rpc.sh@84 -- # killprocess 58781 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@948 -- # '[' -z 58781 ']' 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@952 -- # kill -0 58781 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@953 -- # uname 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58781 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:08.751 19:42:02 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:08.752 19:42:02 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58781' 00:04:08.752 killing process with pid 58781 00:04:08.752 19:42:02 rpc -- common/autotest_common.sh@967 -- # kill 58781 00:04:08.752 19:42:02 rpc -- common/autotest_common.sh@972 -- # wait 58781 00:04:09.010 00:04:09.010 real 0m2.705s 00:04:09.010 user 0m3.450s 00:04:09.010 sys 0m0.658s 00:04:09.010 19:42:03 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.010 ************************************ 00:04:09.010 END TEST rpc 00:04:09.010 ************************************ 00:04:09.010 19:42:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.010 19:42:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:09.010 19:42:03 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.010 19:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.010 19:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.010 19:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.010 ************************************ 00:04:09.010 START TEST skip_rpc 00:04:09.010 ************************************ 00:04:09.010 19:42:03 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.268 * Looking for test storage... 
00:04:09.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.268 19:42:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:09.268 19:42:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:09.268 19:42:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:09.268 19:42:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.268 19:42:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.268 19:42:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.268 ************************************ 00:04:09.268 START TEST skip_rpc 00:04:09.268 ************************************ 00:04:09.268 19:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:09.268 19:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58974 00:04:09.268 19:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.268 19:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:09.268 19:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:09.268 [2024-07-15 19:42:03.341754] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:09.268 [2024-07-15 19:42:03.341863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:04:09.268 [2024-07-15 19:42:03.478438] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.525 [2024-07-15 19:42:03.589769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.525 [2024-07-15 19:42:03.645461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58974 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58974 ']' 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58974 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58974 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.793 killing process with pid 58974 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58974' 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58974 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58974 00:04:14.793 00:04:14.793 real 0m5.410s 00:04:14.793 user 0m5.024s 00:04:14.793 sys 0m0.284s 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.793 19:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.793 ************************************ 00:04:14.793 END TEST skip_rpc 00:04:14.793 ************************************ 00:04:14.793 19:42:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.793 19:42:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.793 19:42:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.793 19:42:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.793 19:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.793 ************************************ 00:04:14.793 START TEST skip_rpc_with_json 00:04:14.793 ************************************ 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59059 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59059 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59059 ']' 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.793 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.794 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:14.794 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.794 19:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.794 [2024-07-15 19:42:08.802010] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:14.794 [2024-07-15 19:42:08.802083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:04:14.794 [2024-07-15 19:42:08.933585] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.053 [2024-07-15 19:42:09.047585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.053 [2024-07-15 19:42:09.101021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.622 [2024-07-15 19:42:09.795176] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.622 request: 00:04:15.622 { 00:04:15.622 "trtype": "tcp", 00:04:15.622 "method": "nvmf_get_transports", 00:04:15.622 "req_id": 1 00:04:15.622 } 00:04:15.622 Got JSON-RPC error response 00:04:15.622 response: 00:04:15.622 { 00:04:15.622 "code": -19, 00:04:15.622 "message": "No such device" 00:04:15.622 } 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.622 [2024-07-15 19:42:09.807265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.622 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.881 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.881 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.881 { 00:04:15.881 "subsystems": [ 00:04:15.881 { 00:04:15.881 "subsystem": "keyring", 00:04:15.881 "config": [] 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "subsystem": "iobuf", 00:04:15.881 "config": [ 00:04:15.881 { 00:04:15.881 "method": "iobuf_set_options", 00:04:15.881 "params": { 00:04:15.881 "small_pool_count": 8192, 00:04:15.881 "large_pool_count": 1024, 00:04:15.881 "small_bufsize": 8192, 00:04:15.881 "large_bufsize": 135168 00:04:15.881 } 00:04:15.881 } 00:04:15.881 
] 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "subsystem": "sock", 00:04:15.881 "config": [ 00:04:15.881 { 00:04:15.881 "method": "sock_set_default_impl", 00:04:15.881 "params": { 00:04:15.881 "impl_name": "uring" 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "sock_impl_set_options", 00:04:15.881 "params": { 00:04:15.881 "impl_name": "ssl", 00:04:15.881 "recv_buf_size": 4096, 00:04:15.881 "send_buf_size": 4096, 00:04:15.881 "enable_recv_pipe": true, 00:04:15.881 "enable_quickack": false, 00:04:15.881 "enable_placement_id": 0, 00:04:15.881 "enable_zerocopy_send_server": true, 00:04:15.881 "enable_zerocopy_send_client": false, 00:04:15.881 "zerocopy_threshold": 0, 00:04:15.881 "tls_version": 0, 00:04:15.881 "enable_ktls": false 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "sock_impl_set_options", 00:04:15.881 "params": { 00:04:15.881 "impl_name": "posix", 00:04:15.881 "recv_buf_size": 2097152, 00:04:15.881 "send_buf_size": 2097152, 00:04:15.881 "enable_recv_pipe": true, 00:04:15.881 "enable_quickack": false, 00:04:15.881 "enable_placement_id": 0, 00:04:15.881 "enable_zerocopy_send_server": true, 00:04:15.881 "enable_zerocopy_send_client": false, 00:04:15.881 "zerocopy_threshold": 0, 00:04:15.881 "tls_version": 0, 00:04:15.881 "enable_ktls": false 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "sock_impl_set_options", 00:04:15.881 "params": { 00:04:15.881 "impl_name": "uring", 00:04:15.881 "recv_buf_size": 2097152, 00:04:15.881 "send_buf_size": 2097152, 00:04:15.881 "enable_recv_pipe": true, 00:04:15.881 "enable_quickack": false, 00:04:15.881 "enable_placement_id": 0, 00:04:15.881 "enable_zerocopy_send_server": false, 00:04:15.881 "enable_zerocopy_send_client": false, 00:04:15.881 "zerocopy_threshold": 0, 00:04:15.881 "tls_version": 0, 00:04:15.881 "enable_ktls": false 00:04:15.881 } 00:04:15.881 } 00:04:15.881 ] 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "subsystem": "vmd", 00:04:15.881 "config": [] 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "subsystem": "accel", 00:04:15.881 "config": [ 00:04:15.881 { 00:04:15.881 "method": "accel_set_options", 00:04:15.881 "params": { 00:04:15.881 "small_cache_size": 128, 00:04:15.881 "large_cache_size": 16, 00:04:15.881 "task_count": 2048, 00:04:15.881 "sequence_count": 2048, 00:04:15.881 "buf_count": 2048 00:04:15.881 } 00:04:15.881 } 00:04:15.881 ] 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "subsystem": "bdev", 00:04:15.881 "config": [ 00:04:15.881 { 00:04:15.881 "method": "bdev_set_options", 00:04:15.881 "params": { 00:04:15.881 "bdev_io_pool_size": 65535, 00:04:15.881 "bdev_io_cache_size": 256, 00:04:15.881 "bdev_auto_examine": true, 00:04:15.881 "iobuf_small_cache_size": 128, 00:04:15.881 "iobuf_large_cache_size": 16 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "bdev_raid_set_options", 00:04:15.881 "params": { 00:04:15.881 "process_window_size_kb": 1024 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "bdev_iscsi_set_options", 00:04:15.881 "params": { 00:04:15.881 "timeout_sec": 30 00:04:15.881 } 00:04:15.881 }, 00:04:15.881 { 00:04:15.881 "method": "bdev_nvme_set_options", 00:04:15.881 "params": { 00:04:15.881 "action_on_timeout": "none", 00:04:15.881 "timeout_us": 0, 00:04:15.881 "timeout_admin_us": 0, 00:04:15.881 "keep_alive_timeout_ms": 10000, 00:04:15.881 "arbitration_burst": 0, 00:04:15.881 "low_priority_weight": 0, 00:04:15.881 "medium_priority_weight": 0, 00:04:15.881 "high_priority_weight": 0, 00:04:15.881 
"nvme_adminq_poll_period_us": 10000, 00:04:15.881 "nvme_ioq_poll_period_us": 0, 00:04:15.881 "io_queue_requests": 0, 00:04:15.881 "delay_cmd_submit": true, 00:04:15.881 "transport_retry_count": 4, 00:04:15.881 "bdev_retry_count": 3, 00:04:15.881 "transport_ack_timeout": 0, 00:04:15.881 "ctrlr_loss_timeout_sec": 0, 00:04:15.881 "reconnect_delay_sec": 0, 00:04:15.881 "fast_io_fail_timeout_sec": 0, 00:04:15.881 "disable_auto_failback": false, 00:04:15.881 "generate_uuids": false, 00:04:15.881 "transport_tos": 0, 00:04:15.881 "nvme_error_stat": false, 00:04:15.881 "rdma_srq_size": 0, 00:04:15.881 "io_path_stat": false, 00:04:15.882 "allow_accel_sequence": false, 00:04:15.882 "rdma_max_cq_size": 0, 00:04:15.882 "rdma_cm_event_timeout_ms": 0, 00:04:15.882 "dhchap_digests": [ 00:04:15.882 "sha256", 00:04:15.882 "sha384", 00:04:15.882 "sha512" 00:04:15.882 ], 00:04:15.882 "dhchap_dhgroups": [ 00:04:15.882 "null", 00:04:15.882 "ffdhe2048", 00:04:15.882 "ffdhe3072", 00:04:15.882 "ffdhe4096", 00:04:15.882 "ffdhe6144", 00:04:15.882 "ffdhe8192" 00:04:15.882 ] 00:04:15.882 } 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "method": "bdev_nvme_set_hotplug", 00:04:15.882 "params": { 00:04:15.882 "period_us": 100000, 00:04:15.882 "enable": false 00:04:15.882 } 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "method": "bdev_wait_for_examine" 00:04:15.882 } 00:04:15.882 ] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "scsi", 00:04:15.882 "config": null 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "scheduler", 00:04:15.882 "config": [ 00:04:15.882 { 00:04:15.882 "method": "framework_set_scheduler", 00:04:15.882 "params": { 00:04:15.882 "name": "static" 00:04:15.882 } 00:04:15.882 } 00:04:15.882 ] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "vhost_scsi", 00:04:15.882 "config": [] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "vhost_blk", 00:04:15.882 "config": [] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "ublk", 00:04:15.882 "config": [] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "nbd", 00:04:15.882 "config": [] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": "nvmf", 00:04:15.882 "config": [ 00:04:15.882 { 00:04:15.882 "method": "nvmf_set_config", 00:04:15.882 "params": { 00:04:15.882 "discovery_filter": "match_any", 00:04:15.882 "admin_cmd_passthru": { 00:04:15.882 "identify_ctrlr": false 00:04:15.882 } 00:04:15.882 } 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "method": "nvmf_set_max_subsystems", 00:04:15.882 "params": { 00:04:15.882 "max_subsystems": 1024 00:04:15.882 } 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "method": "nvmf_set_crdt", 00:04:15.882 "params": { 00:04:15.882 "crdt1": 0, 00:04:15.882 "crdt2": 0, 00:04:15.882 "crdt3": 0 00:04:15.882 } 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "method": "nvmf_create_transport", 00:04:15.882 "params": { 00:04:15.882 "trtype": "TCP", 00:04:15.882 "max_queue_depth": 128, 00:04:15.882 "max_io_qpairs_per_ctrlr": 127, 00:04:15.882 "in_capsule_data_size": 4096, 00:04:15.882 "max_io_size": 131072, 00:04:15.882 "io_unit_size": 131072, 00:04:15.882 "max_aq_depth": 128, 00:04:15.882 "num_shared_buffers": 511, 00:04:15.882 "buf_cache_size": 4294967295, 00:04:15.882 "dif_insert_or_strip": false, 00:04:15.882 "zcopy": false, 00:04:15.882 "c2h_success": true, 00:04:15.882 "sock_priority": 0, 00:04:15.882 "abort_timeout_sec": 1, 00:04:15.882 "ack_timeout": 0, 00:04:15.882 "data_wr_pool_size": 0 00:04:15.882 } 00:04:15.882 } 00:04:15.882 ] 00:04:15.882 }, 00:04:15.882 { 00:04:15.882 "subsystem": 
"iscsi", 00:04:15.882 "config": [ 00:04:15.882 { 00:04:15.882 "method": "iscsi_set_options", 00:04:15.882 "params": { 00:04:15.882 "node_base": "iqn.2016-06.io.spdk", 00:04:15.882 "max_sessions": 128, 00:04:15.882 "max_connections_per_session": 2, 00:04:15.882 "max_queue_depth": 64, 00:04:15.882 "default_time2wait": 2, 00:04:15.882 "default_time2retain": 20, 00:04:15.882 "first_burst_length": 8192, 00:04:15.882 "immediate_data": true, 00:04:15.882 "allow_duplicated_isid": false, 00:04:15.882 "error_recovery_level": 0, 00:04:15.882 "nop_timeout": 60, 00:04:15.882 "nop_in_interval": 30, 00:04:15.882 "disable_chap": false, 00:04:15.882 "require_chap": false, 00:04:15.882 "mutual_chap": false, 00:04:15.882 "chap_group": 0, 00:04:15.882 "max_large_datain_per_connection": 64, 00:04:15.882 "max_r2t_per_connection": 4, 00:04:15.882 "pdu_pool_size": 36864, 00:04:15.882 "immediate_data_pool_size": 16384, 00:04:15.882 "data_out_pool_size": 2048 00:04:15.882 } 00:04:15.882 } 00:04:15.882 ] 00:04:15.882 } 00:04:15.882 ] 00:04:15.882 } 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59059 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59059 ']' 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59059 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.882 19:42:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59059 00:04:15.882 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.882 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.882 killing process with pid 59059 00:04:15.882 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59059' 00:04:15.882 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59059 00:04:15.882 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59059 00:04:16.449 19:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59088 00:04:16.449 19:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.449 19:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59088 ']' 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59088' 00:04:21.715 killing process with pid 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59088 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.715 00:04:21.715 real 0m7.094s 00:04:21.715 user 0m6.799s 00:04:21.715 sys 0m0.681s 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.715 ************************************ 00:04:21.715 END TEST skip_rpc_with_json 00:04:21.715 ************************************ 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.715 19:42:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.715 19:42:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.715 19:42:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.715 19:42:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.715 19:42:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.715 ************************************ 00:04:21.715 START TEST skip_rpc_with_delay 00:04:21.715 ************************************ 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.715 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.715 
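The spdk_tgt invocation traced just above is a deliberate negative test: --wait-for-rpc only makes sense when the RPC server is started, and --no-rpc-server disables it, so the ERROR lines that follow are the expected outcome. A stripped-down sketch of the same check (binary path taken from the trace; the if/exit wrapper stands in for the NOT/es bookkeeping of autotest_common.sh and is illustrative only):

  # Expect a non-zero exit from the conflicting flag combination.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'unexpected success: --wait-for-rpc should be rejected without an RPC server' >&2
      exit 1
  fi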
[2024-07-15 19:42:15.954809] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:21.715 [2024-07-15 19:42:15.954943] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:21.973 00:04:21.973 real 0m0.091s 00:04:21.973 user 0m0.060s 00:04:21.973 sys 0m0.030s 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.973 19:42:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.973 ************************************ 00:04:21.973 END TEST skip_rpc_with_delay 00:04:21.973 ************************************ 00:04:21.973 19:42:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.973 19:42:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.973 19:42:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.973 19:42:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.973 19:42:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.973 19:42:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.973 19:42:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.973 ************************************ 00:04:21.973 START TEST exit_on_failed_rpc_init 00:04:21.973 ************************************ 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59202 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59202 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59202 ']' 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.973 19:42:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.973 [2024-07-15 19:42:16.093189] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:04:21.973 [2024-07-15 19:42:16.093292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59202 ] 00:04:22.232 [2024-07-15 19:42:16.229458] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.232 [2024-07-15 19:42:16.358069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.232 [2024-07-15 19:42:16.415306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.836 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.095 [2024-07-15 19:42:17.093811] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:23.095 [2024-07-15 19:42:17.093913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:04:23.095 [2024-07-15 19:42:17.234334] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.354 [2024-07-15 19:42:17.353318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.354 [2024-07-15 19:42:17.353423] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:23.354 [2024-07-15 19:42:17.353440] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.354 [2024-07-15 19:42:17.353450] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59202 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59202 ']' 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59202 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59202 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.354 killing process with pid 59202 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59202' 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59202 00:04:23.354 19:42:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59202 00:04:23.922 00:04:23.922 real 0m2.006s 00:04:23.922 user 0m2.297s 00:04:23.922 sys 0m0.432s 00:04:23.922 19:42:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.922 19:42:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.922 ************************************ 00:04:23.922 END TEST exit_on_failed_rpc_init 00:04:23.922 ************************************ 00:04:23.922 19:42:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:23.922 19:42:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.922 00:04:23.922 real 0m14.900s 00:04:23.922 user 0m14.280s 00:04:23.922 sys 0m1.609s 00:04:23.922 19:42:18 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.922 19:42:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.922 ************************************ 00:04:23.922 END TEST skip_rpc 00:04:23.922 ************************************ 00:04:23.922 19:42:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.922 19:42:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.922 19:42:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.922 
19:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.922 19:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:23.922 ************************************ 00:04:23.922 START TEST rpc_client 00:04:23.922 ************************************ 00:04:23.922 19:42:18 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.181 * Looking for test storage... 00:04:24.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.182 19:42:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.182 OK 00:04:24.182 19:42:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.182 00:04:24.182 real 0m0.103s 00:04:24.182 user 0m0.044s 00:04:24.182 sys 0m0.066s 00:04:24.182 19:42:18 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.182 19:42:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.182 ************************************ 00:04:24.182 END TEST rpc_client 00:04:24.182 ************************************ 00:04:24.182 19:42:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.182 19:42:18 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.182 19:42:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.182 19:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.182 19:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.182 ************************************ 00:04:24.182 START TEST json_config 00:04:24.182 ************************************ 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.182 19:42:18 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.182 19:42:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.182 19:42:18 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.182 19:42:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.182 19:42:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.182 19:42:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.182 19:42:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.182 19:42:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.182 19:42:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.182 19:42:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.182 INFO: JSON configuration test init 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.182 19:42:18 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.182 19:42:18 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.182 19:42:18 json_config -- json_config/common.sh@10 -- # shift 00:04:24.182 19:42:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.182 19:42:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.182 19:42:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.182 19:42:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.182 19:42:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.182 19:42:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59339 00:04:24.182 19:42:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.182 Waiting for target to run... 
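The launch being prepared here reduces to starting spdk_tgt with --wait-for-rpc and waiting until its RPC socket responds. A condensed by-hand equivalent, with the binary, core mask, memory size, and socket path copied from the trace; the polling loop is only illustrative and is not the waitforlisten helper itself, and rpc_get_methods is used purely as a cheap liveness probe (any harmless RPC would do):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  pid=$!
  # Poll the Unix-domain RPC socket until the target answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done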
00:04:24.182 19:42:18 json_config -- json_config/common.sh@25 -- # waitforlisten 59339 /var/tmp/spdk_tgt.sock 00:04:24.182 19:42:18 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@829 -- # '[' -z 59339 ']' 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.182 19:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.441 [2024-07-15 19:42:18.451982] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:24.441 [2024-07-15 19:42:18.452084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:04:24.701 [2024-07-15 19:42:18.875676] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.960 [2024-07-15 19:42:18.981686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.528 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:25.528 19:42:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.528 19:42:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.528 19:42:19 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:25.528 19:42:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.786 [2024-07-15 19:42:19.838557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.044 19:42:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.044 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.044 19:42:20 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.044 19:42:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.044 19:42:20 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:26.326 19:42:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.326 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:26.326 19:42:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.326 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:26.326 19:42:20 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.326 19:42:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.583 MallocForNvmf0 00:04:26.583 19:42:20 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.583 19:42:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.883 MallocForNvmf1 00:04:26.883 19:42:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.883 19:42:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.141 [2024-07-15 19:42:21.180434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.141 19:42:21 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.141 19:42:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.399 19:42:21 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.399 19:42:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.657 19:42:21 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.657 19:42:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.915 19:42:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.915 19:42:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.173 [2024-07-15 19:42:22.293060] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.173 19:42:22 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:28.173 19:42:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.173 19:42:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.173 19:42:22 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:28.173 19:42:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.173 19:42:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.173 19:42:22 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:28.173 19:42:22 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.173 19:42:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.432 MallocBdevForConfigChangeCheck 00:04:28.432 19:42:22 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:28.432 19:42:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.432 19:42:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.690 19:42:22 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:28.690 19:42:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.948 INFO: shutting down applications... 00:04:28.948 19:42:23 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
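Before the shutdown that follows, note that everything the target now holds (two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with two namespaces and a 127.0.0.1:4420 listener, plus the scratch MallocBdevForConfigChangeCheck used later) was assembled through the plain rpc.py calls traced above. Collected into one by-hand sketch against the same socket, with every value copied from the trace rather than invented:

  # Helper: issue RPCs against the same socket the test uses.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # size in MB, then block size
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0           # same flags as the tgt_rpc call above
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420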
00:04:28.948 19:42:23 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:28.948 19:42:23 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:28.948 19:42:23 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:28.948 19:42:23 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.209 Calling clear_iscsi_subsystem 00:04:29.209 Calling clear_nvmf_subsystem 00:04:29.209 Calling clear_nbd_subsystem 00:04:29.209 Calling clear_ublk_subsystem 00:04:29.209 Calling clear_vhost_blk_subsystem 00:04:29.209 Calling clear_vhost_scsi_subsystem 00:04:29.209 Calling clear_bdev_subsystem 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.209 19:42:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.775 19:42:23 json_config -- json_config/json_config.sh@345 -- # break 00:04:29.775 19:42:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:29.775 19:42:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:29.775 19:42:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.775 19:42:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.775 19:42:23 json_config -- json_config/common.sh@35 -- # [[ -n 59339 ]] 00:04:29.775 19:42:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59339 00:04:29.775 19:42:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.775 19:42:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.775 19:42:23 json_config -- json_config/common.sh@41 -- # kill -0 59339 00:04:29.775 19:42:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.342 19:42:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.342 19:42:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.342 19:42:24 json_config -- json_config/common.sh@41 -- # kill -0 59339 00:04:30.342 19:42:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.342 19:42:24 json_config -- json_config/common.sh@43 -- # break 00:04:30.342 19:42:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.342 19:42:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.342 SPDK target shutdown done 00:04:30.342 19:42:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:30.342 INFO: relaunching applications... 
00:04:30.342 19:42:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.342 19:42:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.342 19:42:24 json_config -- json_config/common.sh@10 -- # shift 00:04:30.342 19:42:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.342 19:42:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.342 19:42:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.342 19:42:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.342 19:42:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.342 19:42:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59535 00:04:30.342 19:42:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.342 Waiting for target to run... 00:04:30.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.342 19:42:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.342 19:42:24 json_config -- json_config/common.sh@25 -- # waitforlisten 59535 /var/tmp/spdk_tgt.sock 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 59535 ']' 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.342 19:42:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.342 [2024-07-15 19:42:24.432024] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:30.342 [2024-07-15 19:42:24.432109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:04:30.648 [2024-07-15 19:42:24.861917] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.906 [2024-07-15 19:42:24.953217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.906 [2024-07-15 19:42:25.079540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.165 [2024-07-15 19:42:25.293869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.165 [2024-07-15 19:42:25.325935] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.423 19:42:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.423 19:42:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:31.423 00:04:31.423 19:42:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.423 19:42:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:31.423 INFO: Checking if target configuration is the same... 
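The "same configuration" check announced above is, at heart, a sort-and-diff: the configuration the relaunched target reports over save_config is compared against the spdk_tgt_config.json it was started from. A condensed sketch of what json_diff.sh does next (the real script mktemps its temporary files; /tmp/live.json and /tmp/file.json are placeholder names here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  # Normalize both documents so field ordering cannot cause a spurious diff.
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'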
00:04:31.423 19:42:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.423 19:42:25 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.423 19:42:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:31.423 19:42:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.423 + '[' 2 -ne 2 ']' 00:04:31.423 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.423 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.423 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.423 +++ basename /dev/fd/62 00:04:31.423 ++ mktemp /tmp/62.XXX 00:04:31.423 + tmp_file_1=/tmp/62.7m4 00:04:31.423 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.423 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.423 + tmp_file_2=/tmp/spdk_tgt_config.json.SCQ 00:04:31.423 + ret=0 00:04:31.423 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.681 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.681 + diff -u /tmp/62.7m4 /tmp/spdk_tgt_config.json.SCQ 00:04:31.681 INFO: JSON config files are the same 00:04:31.681 + echo 'INFO: JSON config files are the same' 00:04:31.681 + rm /tmp/62.7m4 /tmp/spdk_tgt_config.json.SCQ 00:04:31.681 + exit 0 00:04:31.940 19:42:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:31.940 INFO: changing configuration and checking if this can be detected... 00:04:31.940 19:42:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.940 19:42:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.940 19:42:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.200 19:42:26 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.200 19:42:26 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:32.200 19:42:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.200 + '[' 2 -ne 2 ']' 00:04:32.200 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:32.200 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:32.200 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:32.200 +++ basename /dev/fd/62 00:04:32.200 ++ mktemp /tmp/62.XXX 00:04:32.200 + tmp_file_1=/tmp/62.5WS 00:04:32.200 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.200 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.200 + tmp_file_2=/tmp/spdk_tgt_config.json.9eW 00:04:32.200 + ret=0 00:04:32.200 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.459 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.459 + diff -u /tmp/62.5WS /tmp/spdk_tgt_config.json.9eW 00:04:32.459 + ret=1 00:04:32.459 + echo '=== Start of file: /tmp/62.5WS ===' 00:04:32.459 + cat /tmp/62.5WS 00:04:32.459 + echo '=== End of file: /tmp/62.5WS ===' 00:04:32.459 + echo '' 00:04:32.459 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9eW ===' 00:04:32.459 + cat /tmp/spdk_tgt_config.json.9eW 00:04:32.459 + echo '=== End of file: /tmp/spdk_tgt_config.json.9eW ===' 00:04:32.459 + echo '' 00:04:32.459 + rm /tmp/62.5WS /tmp/spdk_tgt_config.json.9eW 00:04:32.459 + exit 1 00:04:32.459 INFO: configuration change detected. 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:32.459 19:42:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.459 19:42:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 59535 ]] 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.459 19:42:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.459 19:42:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:32.459 19:42:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:32.719 19:42:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:32.719 19:42:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.719 19:42:26 json_config -- json_config/json_config.sh@323 -- # killprocess 59535 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@948 -- # '[' -z 59535 ']' 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@952 -- # kill -0 59535 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@953 -- # uname 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59535 00:04:32.719 
killing process with pid 59535 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59535' 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@967 -- # kill 59535 00:04:32.719 19:42:26 json_config -- common/autotest_common.sh@972 -- # wait 59535 00:04:32.978 19:42:27 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.978 19:42:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:32.978 19:42:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.978 19:42:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 INFO: Success 00:04:32.978 19:42:27 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:32.978 19:42:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:32.978 00:04:32.978 real 0m8.761s 00:04:32.978 user 0m12.692s 00:04:32.978 sys 0m1.829s 00:04:32.978 ************************************ 00:04:32.978 END TEST json_config 00:04:32.978 ************************************ 00:04:32.978 19:42:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.978 19:42:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 19:42:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.978 19:42:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.978 19:42:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.978 19:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.978 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 ************************************ 00:04:32.978 START TEST json_config_extra_key 00:04:32.978 ************************************ 00:04:32.978 19:42:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.978 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.978 19:42:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.978 19:42:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.978 19:42:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.978 19:42:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.978 19:42:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.978 19:42:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.978 19:42:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.978 19:42:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.979 19:42:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.979 19:42:27 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.979 19:42:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.979 INFO: launching applications... 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.979 19:42:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59675 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.979 Waiting for target to run... 00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
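Like the relaunch step in the json_config test, the extra_key test starts the target directly from a JSON file instead of configuring it over RPC, so no RPC-driven setup follows the launch. The stand-alone equivalent of the command traced above (flags and paths copied from the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json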
00:04:32.979 19:42:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59675 /var/tmp/spdk_tgt.sock 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59675 ']' 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.979 19:42:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.238 [2024-07-15 19:42:27.240971] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:33.238 [2024-07-15 19:42:27.241077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:04:33.496 [2024-07-15 19:42:27.675032] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.754 [2024-07-15 19:42:27.771240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.754 [2024-07-15 19:42:27.791567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.012 19:42:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.012 00:04:34.012 19:42:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.012 INFO: shutting down applications... 00:04:34.012 19:42:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
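[Editor's note] The trace above shows json_config_test_start_app launching spdk_tgt with -r /var/tmp/spdk_tgt.sock and then waitforlisten blocking until the target answers on that RPC socket (max_retries=100 in the trace). The snippet below is only an illustrative stand-in for that polling behaviour, not the actual autotest_common.sh implementation; the socket path, retry budget, and rpc.py invocation mirror values visible in the log, and rpc_get_methods is used here purely as a cheap liveness probe.

    #!/usr/bin/env bash
    # Illustrative sketch: poll until a freshly launched SPDK target answers
    # RPCs on its UNIX-domain socket, or give up after ~100 retries.
    wait_for_spdk_rpc() {
        local pid=$1
        local sock=${2:-/var/tmp/spdk_tgt.sock}
        local retries=${3:-100}

        for ((i = 0; i < retries; i++)); do
            # Bail out early if the target process already died.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods only succeeds once the RPC server is listening.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }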
00:04:34.012 19:42:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59675 ]] 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59675 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59675 00:04:34.012 19:42:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59675 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.578 SPDK target shutdown done 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.578 19:42:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.578 Success 00:04:34.578 19:42:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.578 ************************************ 00:04:34.578 END TEST json_config_extra_key 00:04:34.578 ************************************ 00:04:34.578 00:04:34.578 real 0m1.639s 00:04:34.578 user 0m1.569s 00:04:34.578 sys 0m0.426s 00:04:34.578 19:42:28 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.578 19:42:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.578 19:42:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.578 19:42:28 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.578 19:42:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.578 19:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.578 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.578 ************************************ 00:04:34.578 START TEST alias_rpc 00:04:34.578 ************************************ 00:04:34.578 19:42:28 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.854 * Looking for test storage... 
00:04:34.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.854 19:42:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.854 19:42:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59740 00:04:34.854 19:42:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59740 00:04:34.854 19:42:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59740 ']' 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.854 19:42:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.854 [2024-07-15 19:42:28.923079] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:34.854 [2024-07-15 19:42:28.923175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:04:34.854 [2024-07-15 19:42:29.060893] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.116 [2024-07-15 19:42:29.174538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.116 [2024-07-15 19:42:29.234327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:36.049 19:42:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.049 19:42:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:36.049 19:42:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.049 19:42:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59740 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59740 ']' 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59740 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59740 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.049 killing process with pid 59740 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59740' 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@967 -- # kill 59740 00:04:36.049 19:42:30 alias_rpc -- common/autotest_common.sh@972 -- # wait 59740 00:04:36.614 00:04:36.614 real 0m1.844s 00:04:36.614 user 0m2.126s 00:04:36.614 sys 0m0.433s 00:04:36.614 19:42:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.614 19:42:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.614 
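[Editor's note] The killprocess calls traced above follow a recurring autotest pattern: confirm the PID still belongs to an SPDK reactor process, send it a signal, then reap it with wait. The function below is a simplified, illustrative rendering of that pattern (the real helper lives in autotest_common.sh); the ps invocation and messages mirror what appears in the log.

    # Simplified sketch of the killprocess pattern seen in the trace.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1                      # nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0         # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. "reactor_0"
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                        # reap only if it is our child
    }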
************************************ 00:04:36.614 END TEST alias_rpc 00:04:36.614 ************************************ 00:04:36.614 19:42:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.614 19:42:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:36.614 19:42:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.614 19:42:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.614 19:42:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.614 19:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.614 ************************************ 00:04:36.614 START TEST spdkcli_tcp 00:04:36.614 ************************************ 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.614 * Looking for test storage... 00:04:36.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59816 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.614 19:42:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59816 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59816 ']' 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.614 19:42:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.614 [2024-07-15 19:42:30.834194] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:04:36.614 [2024-07-15 19:42:30.834317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:04:36.872 [2024-07-15 19:42:30.973403] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.872 [2024-07-15 19:42:31.074954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.872 [2024-07-15 19:42:31.074964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.129 [2024-07-15 19:42:31.129741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.694 19:42:31 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.694 19:42:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:37.694 19:42:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59833 00:04:37.694 19:42:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.694 19:42:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.954 [ 00:04:37.954 "bdev_malloc_delete", 00:04:37.954 "bdev_malloc_create", 00:04:37.954 "bdev_null_resize", 00:04:37.954 "bdev_null_delete", 00:04:37.954 "bdev_null_create", 00:04:37.954 "bdev_nvme_cuse_unregister", 00:04:37.954 "bdev_nvme_cuse_register", 00:04:37.954 "bdev_opal_new_user", 00:04:37.954 "bdev_opal_set_lock_state", 00:04:37.954 "bdev_opal_delete", 00:04:37.954 "bdev_opal_get_info", 00:04:37.954 "bdev_opal_create", 00:04:37.954 "bdev_nvme_opal_revert", 00:04:37.954 "bdev_nvme_opal_init", 00:04:37.954 "bdev_nvme_send_cmd", 00:04:37.954 "bdev_nvme_get_path_iostat", 00:04:37.954 "bdev_nvme_get_mdns_discovery_info", 00:04:37.954 "bdev_nvme_stop_mdns_discovery", 00:04:37.954 "bdev_nvme_start_mdns_discovery", 00:04:37.954 "bdev_nvme_set_multipath_policy", 00:04:37.954 "bdev_nvme_set_preferred_path", 00:04:37.954 "bdev_nvme_get_io_paths", 00:04:37.954 "bdev_nvme_remove_error_injection", 00:04:37.954 "bdev_nvme_add_error_injection", 00:04:37.954 "bdev_nvme_get_discovery_info", 00:04:37.954 "bdev_nvme_stop_discovery", 00:04:37.954 "bdev_nvme_start_discovery", 00:04:37.954 "bdev_nvme_get_controller_health_info", 00:04:37.954 "bdev_nvme_disable_controller", 00:04:37.954 "bdev_nvme_enable_controller", 00:04:37.954 "bdev_nvme_reset_controller", 00:04:37.954 "bdev_nvme_get_transport_statistics", 00:04:37.954 "bdev_nvme_apply_firmware", 00:04:37.954 "bdev_nvme_detach_controller", 00:04:37.954 "bdev_nvme_get_controllers", 00:04:37.954 "bdev_nvme_attach_controller", 00:04:37.954 "bdev_nvme_set_hotplug", 00:04:37.954 "bdev_nvme_set_options", 00:04:37.954 "bdev_passthru_delete", 00:04:37.954 "bdev_passthru_create", 00:04:37.954 "bdev_lvol_set_parent_bdev", 00:04:37.954 "bdev_lvol_set_parent", 00:04:37.954 "bdev_lvol_check_shallow_copy", 00:04:37.954 "bdev_lvol_start_shallow_copy", 00:04:37.954 "bdev_lvol_grow_lvstore", 00:04:37.954 "bdev_lvol_get_lvols", 00:04:37.954 "bdev_lvol_get_lvstores", 00:04:37.954 "bdev_lvol_delete", 00:04:37.954 "bdev_lvol_set_read_only", 00:04:37.954 "bdev_lvol_resize", 00:04:37.954 "bdev_lvol_decouple_parent", 00:04:37.954 "bdev_lvol_inflate", 00:04:37.954 "bdev_lvol_rename", 00:04:37.954 "bdev_lvol_clone_bdev", 00:04:37.954 "bdev_lvol_clone", 00:04:37.954 "bdev_lvol_snapshot", 00:04:37.954 "bdev_lvol_create", 
00:04:37.954 "bdev_lvol_delete_lvstore", 00:04:37.954 "bdev_lvol_rename_lvstore", 00:04:37.954 "bdev_lvol_create_lvstore", 00:04:37.954 "bdev_raid_set_options", 00:04:37.954 "bdev_raid_remove_base_bdev", 00:04:37.954 "bdev_raid_add_base_bdev", 00:04:37.954 "bdev_raid_delete", 00:04:37.954 "bdev_raid_create", 00:04:37.954 "bdev_raid_get_bdevs", 00:04:37.954 "bdev_error_inject_error", 00:04:37.954 "bdev_error_delete", 00:04:37.954 "bdev_error_create", 00:04:37.954 "bdev_split_delete", 00:04:37.954 "bdev_split_create", 00:04:37.954 "bdev_delay_delete", 00:04:37.954 "bdev_delay_create", 00:04:37.954 "bdev_delay_update_latency", 00:04:37.954 "bdev_zone_block_delete", 00:04:37.954 "bdev_zone_block_create", 00:04:37.954 "blobfs_create", 00:04:37.954 "blobfs_detect", 00:04:37.954 "blobfs_set_cache_size", 00:04:37.954 "bdev_aio_delete", 00:04:37.954 "bdev_aio_rescan", 00:04:37.954 "bdev_aio_create", 00:04:37.954 "bdev_ftl_set_property", 00:04:37.954 "bdev_ftl_get_properties", 00:04:37.954 "bdev_ftl_get_stats", 00:04:37.954 "bdev_ftl_unmap", 00:04:37.954 "bdev_ftl_unload", 00:04:37.954 "bdev_ftl_delete", 00:04:37.954 "bdev_ftl_load", 00:04:37.954 "bdev_ftl_create", 00:04:37.954 "bdev_virtio_attach_controller", 00:04:37.954 "bdev_virtio_scsi_get_devices", 00:04:37.954 "bdev_virtio_detach_controller", 00:04:37.954 "bdev_virtio_blk_set_hotplug", 00:04:37.954 "bdev_iscsi_delete", 00:04:37.954 "bdev_iscsi_create", 00:04:37.954 "bdev_iscsi_set_options", 00:04:37.954 "bdev_uring_delete", 00:04:37.954 "bdev_uring_rescan", 00:04:37.954 "bdev_uring_create", 00:04:37.954 "accel_error_inject_error", 00:04:37.954 "ioat_scan_accel_module", 00:04:37.954 "dsa_scan_accel_module", 00:04:37.954 "iaa_scan_accel_module", 00:04:37.954 "keyring_file_remove_key", 00:04:37.954 "keyring_file_add_key", 00:04:37.954 "keyring_linux_set_options", 00:04:37.954 "iscsi_get_histogram", 00:04:37.954 "iscsi_enable_histogram", 00:04:37.954 "iscsi_set_options", 00:04:37.954 "iscsi_get_auth_groups", 00:04:37.954 "iscsi_auth_group_remove_secret", 00:04:37.954 "iscsi_auth_group_add_secret", 00:04:37.954 "iscsi_delete_auth_group", 00:04:37.954 "iscsi_create_auth_group", 00:04:37.954 "iscsi_set_discovery_auth", 00:04:37.954 "iscsi_get_options", 00:04:37.954 "iscsi_target_node_request_logout", 00:04:37.954 "iscsi_target_node_set_redirect", 00:04:37.954 "iscsi_target_node_set_auth", 00:04:37.954 "iscsi_target_node_add_lun", 00:04:37.954 "iscsi_get_stats", 00:04:37.954 "iscsi_get_connections", 00:04:37.954 "iscsi_portal_group_set_auth", 00:04:37.954 "iscsi_start_portal_group", 00:04:37.954 "iscsi_delete_portal_group", 00:04:37.954 "iscsi_create_portal_group", 00:04:37.954 "iscsi_get_portal_groups", 00:04:37.954 "iscsi_delete_target_node", 00:04:37.954 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.954 "iscsi_target_node_add_pg_ig_maps", 00:04:37.954 "iscsi_create_target_node", 00:04:37.954 "iscsi_get_target_nodes", 00:04:37.954 "iscsi_delete_initiator_group", 00:04:37.954 "iscsi_initiator_group_remove_initiators", 00:04:37.954 "iscsi_initiator_group_add_initiators", 00:04:37.954 "iscsi_create_initiator_group", 00:04:37.954 "iscsi_get_initiator_groups", 00:04:37.954 "nvmf_set_crdt", 00:04:37.954 "nvmf_set_config", 00:04:37.954 "nvmf_set_max_subsystems", 00:04:37.954 "nvmf_stop_mdns_prr", 00:04:37.954 "nvmf_publish_mdns_prr", 00:04:37.954 "nvmf_subsystem_get_listeners", 00:04:37.954 "nvmf_subsystem_get_qpairs", 00:04:37.954 "nvmf_subsystem_get_controllers", 00:04:37.954 "nvmf_get_stats", 00:04:37.954 "nvmf_get_transports", 00:04:37.954 
"nvmf_create_transport", 00:04:37.954 "nvmf_get_targets", 00:04:37.954 "nvmf_delete_target", 00:04:37.954 "nvmf_create_target", 00:04:37.954 "nvmf_subsystem_allow_any_host", 00:04:37.954 "nvmf_subsystem_remove_host", 00:04:37.954 "nvmf_subsystem_add_host", 00:04:37.954 "nvmf_ns_remove_host", 00:04:37.954 "nvmf_ns_add_host", 00:04:37.954 "nvmf_subsystem_remove_ns", 00:04:37.954 "nvmf_subsystem_add_ns", 00:04:37.954 "nvmf_subsystem_listener_set_ana_state", 00:04:37.954 "nvmf_discovery_get_referrals", 00:04:37.954 "nvmf_discovery_remove_referral", 00:04:37.954 "nvmf_discovery_add_referral", 00:04:37.954 "nvmf_subsystem_remove_listener", 00:04:37.954 "nvmf_subsystem_add_listener", 00:04:37.954 "nvmf_delete_subsystem", 00:04:37.954 "nvmf_create_subsystem", 00:04:37.954 "nvmf_get_subsystems", 00:04:37.954 "env_dpdk_get_mem_stats", 00:04:37.954 "nbd_get_disks", 00:04:37.954 "nbd_stop_disk", 00:04:37.954 "nbd_start_disk", 00:04:37.954 "ublk_recover_disk", 00:04:37.954 "ublk_get_disks", 00:04:37.954 "ublk_stop_disk", 00:04:37.954 "ublk_start_disk", 00:04:37.954 "ublk_destroy_target", 00:04:37.954 "ublk_create_target", 00:04:37.954 "virtio_blk_create_transport", 00:04:37.954 "virtio_blk_get_transports", 00:04:37.954 "vhost_controller_set_coalescing", 00:04:37.954 "vhost_get_controllers", 00:04:37.954 "vhost_delete_controller", 00:04:37.954 "vhost_create_blk_controller", 00:04:37.954 "vhost_scsi_controller_remove_target", 00:04:37.954 "vhost_scsi_controller_add_target", 00:04:37.954 "vhost_start_scsi_controller", 00:04:37.954 "vhost_create_scsi_controller", 00:04:37.954 "thread_set_cpumask", 00:04:37.954 "framework_get_governor", 00:04:37.954 "framework_get_scheduler", 00:04:37.954 "framework_set_scheduler", 00:04:37.954 "framework_get_reactors", 00:04:37.954 "thread_get_io_channels", 00:04:37.954 "thread_get_pollers", 00:04:37.954 "thread_get_stats", 00:04:37.954 "framework_monitor_context_switch", 00:04:37.954 "spdk_kill_instance", 00:04:37.954 "log_enable_timestamps", 00:04:37.954 "log_get_flags", 00:04:37.954 "log_clear_flag", 00:04:37.954 "log_set_flag", 00:04:37.954 "log_get_level", 00:04:37.954 "log_set_level", 00:04:37.954 "log_get_print_level", 00:04:37.954 "log_set_print_level", 00:04:37.954 "framework_enable_cpumask_locks", 00:04:37.954 "framework_disable_cpumask_locks", 00:04:37.954 "framework_wait_init", 00:04:37.954 "framework_start_init", 00:04:37.954 "scsi_get_devices", 00:04:37.954 "bdev_get_histogram", 00:04:37.954 "bdev_enable_histogram", 00:04:37.954 "bdev_set_qos_limit", 00:04:37.954 "bdev_set_qd_sampling_period", 00:04:37.954 "bdev_get_bdevs", 00:04:37.954 "bdev_reset_iostat", 00:04:37.954 "bdev_get_iostat", 00:04:37.954 "bdev_examine", 00:04:37.954 "bdev_wait_for_examine", 00:04:37.954 "bdev_set_options", 00:04:37.954 "notify_get_notifications", 00:04:37.954 "notify_get_types", 00:04:37.954 "accel_get_stats", 00:04:37.954 "accel_set_options", 00:04:37.954 "accel_set_driver", 00:04:37.954 "accel_crypto_key_destroy", 00:04:37.954 "accel_crypto_keys_get", 00:04:37.954 "accel_crypto_key_create", 00:04:37.954 "accel_assign_opc", 00:04:37.954 "accel_get_module_info", 00:04:37.954 "accel_get_opc_assignments", 00:04:37.954 "vmd_rescan", 00:04:37.955 "vmd_remove_device", 00:04:37.955 "vmd_enable", 00:04:37.955 "sock_get_default_impl", 00:04:37.955 "sock_set_default_impl", 00:04:37.955 "sock_impl_set_options", 00:04:37.955 "sock_impl_get_options", 00:04:37.955 "iobuf_get_stats", 00:04:37.955 "iobuf_set_options", 00:04:37.955 "framework_get_pci_devices", 00:04:37.955 
"framework_get_config", 00:04:37.955 "framework_get_subsystems", 00:04:37.955 "trace_get_info", 00:04:37.955 "trace_get_tpoint_group_mask", 00:04:37.955 "trace_disable_tpoint_group", 00:04:37.955 "trace_enable_tpoint_group", 00:04:37.955 "trace_clear_tpoint_mask", 00:04:37.955 "trace_set_tpoint_mask", 00:04:37.955 "keyring_get_keys", 00:04:37.955 "spdk_get_version", 00:04:37.955 "rpc_get_methods" 00:04:37.955 ] 00:04:37.955 19:42:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.955 19:42:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.955 19:42:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59816 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59816 ']' 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59816 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59816 00:04:37.955 killing process with pid 59816 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59816' 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59816 00:04:37.955 19:42:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59816 00:04:38.522 ************************************ 00:04:38.522 END TEST spdkcli_tcp 00:04:38.522 ************************************ 00:04:38.522 00:04:38.522 real 0m1.869s 00:04:38.522 user 0m3.500s 00:04:38.522 sys 0m0.482s 00:04:38.522 19:42:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.522 19:42:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 19:42:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.522 19:42:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.522 19:42:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.522 19:42:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.522 19:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 START TEST dpdk_mem_utility 00:04:38.522 ************************************ 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.522 * Looking for test storage... 
00:04:38.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:38.522 19:42:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.522 19:42:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59906 00:04:38.522 19:42:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59906 00:04:38.522 19:42:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59906 ']' 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.522 19:42:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 [2024-07-15 19:42:32.735420] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:38.522 [2024-07-15 19:42:32.735519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:04:38.781 [2024-07-15 19:42:32.870966] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.781 [2024-07-15 19:42:32.984793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.040 [2024-07-15 19:42:33.040279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.606 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.606 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:39.606 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.606 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.606 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.606 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.606 { 00:04:39.606 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.606 } 00:04:39.606 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.606 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.866 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:39.866 1 heaps totaling size 814.000000 MiB 00:04:39.866 size: 814.000000 MiB heap id: 0 00:04:39.866 end heaps---------- 00:04:39.866 8 mempools totaling size 598.116089 MiB 00:04:39.866 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:39.866 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:39.866 size: 84.521057 MiB name: bdev_io_59906 00:04:39.866 size: 51.011292 MiB name: evtpool_59906 00:04:39.866 size: 50.003479 
MiB name: msgpool_59906 00:04:39.866 size: 21.763794 MiB name: PDU_Pool 00:04:39.866 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:39.866 size: 0.026123 MiB name: Session_Pool 00:04:39.866 end mempools------- 00:04:39.866 6 memzones totaling size 4.142822 MiB 00:04:39.866 size: 1.000366 MiB name: RG_ring_0_59906 00:04:39.866 size: 1.000366 MiB name: RG_ring_1_59906 00:04:39.866 size: 1.000366 MiB name: RG_ring_4_59906 00:04:39.866 size: 1.000366 MiB name: RG_ring_5_59906 00:04:39.866 size: 0.125366 MiB name: RG_ring_2_59906 00:04:39.866 size: 0.015991 MiB name: RG_ring_3_59906 00:04:39.866 end memzones------- 00:04:39.866 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.866 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:04:39.866 list of free elements. size: 12.471375 MiB 00:04:39.866 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:39.866 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:39.866 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:39.866 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:39.866 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:39.866 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:39.866 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:39.866 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:39.866 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:39.866 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:04:39.866 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:39.866 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:39.866 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:39.866 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:39.866 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:39.866 list of standard malloc elements. 
size: 199.266052 MiB 00:04:39.866 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:39.866 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:39.866 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:39.866 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:39.866 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:39.866 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:39.866 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:39.866 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:39.866 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:39.866 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:39.866 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:39.867 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:39.867 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:39.867 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:04:39.868 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:39.868 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:39.868 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:39.868 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:39.868 list of memzone associated elements. size: 602.262573 MiB 00:04:39.868 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:39.868 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.868 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:39.868 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.868 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:39.868 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59906_0 00:04:39.868 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:39.868 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59906_0 00:04:39.868 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:39.868 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59906_0 00:04:39.868 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:39.868 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.868 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:39.868 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.868 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:39.868 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59906 00:04:39.868 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:39.868 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59906 00:04:39.868 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:39.868 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59906 00:04:39.868 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:39.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.869 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:39.869 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.869 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:39.869 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.869 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:39.869 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.869 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:39.869 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59906 00:04:39.869 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:39.869 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59906 00:04:39.869 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:39.869 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59906 00:04:39.869 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:39.869 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59906 00:04:39.869 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:39.869 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59906 
00:04:39.869 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:39.869 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.869 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:39.869 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.869 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:39.869 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.869 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:39.869 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59906 00:04:39.869 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:39.869 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.869 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:39.869 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.869 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:39.869 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59906 00:04:39.869 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:39.869 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.869 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:39.869 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59906 00:04:39.869 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:39.869 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59906 00:04:39.869 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:39.869 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.869 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.869 19:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59906 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59906 ']' 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59906 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59906 00:04:39.869 killing process with pid 59906 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59906' 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59906 00:04:39.869 19:42:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59906 00:04:40.128 00:04:40.128 real 0m1.745s 00:04:40.128 user 0m1.942s 00:04:40.128 sys 0m0.423s 00:04:40.128 19:42:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.128 19:42:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.128 ************************************ 00:04:40.128 END TEST dpdk_mem_utility 00:04:40.128 ************************************ 00:04:40.387 19:42:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.387 19:42:34 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.387 19:42:34 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.387 19:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.387 19:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.387 ************************************ 00:04:40.387 START TEST event 00:04:40.387 ************************************ 00:04:40.387 19:42:34 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.387 * Looking for test storage... 00:04:40.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:40.387 19:42:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:40.387 19:42:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.387 19:42:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.387 19:42:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:40.387 19:42:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.387 19:42:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.387 ************************************ 00:04:40.387 START TEST event_perf 00:04:40.387 ************************************ 00:04:40.387 19:42:34 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.387 Running I/O for 1 seconds...[2024-07-15 19:42:34.500565] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:40.387 [2024-07-15 19:42:34.500928] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:04:40.645 [2024-07-15 19:42:34.640115] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.645 [2024-07-15 19:42:34.764450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.645 [2024-07-15 19:42:34.764613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.645 [2024-07-15 19:42:34.764720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.645 [2024-07-15 19:42:34.765534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.020 Running I/O for 1 seconds... 00:04:42.020 lcore 0: 182067 00:04:42.020 lcore 1: 182068 00:04:42.020 lcore 2: 182068 00:04:42.020 lcore 3: 182067 00:04:42.020 done. 
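The per-lcore lines above are event_perf's own summary for this run: the harness launches the binary with a four-core mask (-m 0xF) for one second (-t 1), one reactor is started per bit in the mask, and each "lcore N:" counter is the number of events that reactor processed before "done." is printed. A minimal sketch of reproducing the run by hand from the same checkout (run as root with huge pages already configured — both assumed here, not shown in this log):

  # event round-trip benchmark: 4 reactors (mask 0xF), 1 second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1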
00:04:42.020 00:04:42.020 real 0m1.371s 00:04:42.020 ************************************ 00:04:42.020 END TEST event_perf 00:04:42.020 ************************************ 00:04:42.020 user 0m4.180s 00:04:42.020 sys 0m0.068s 00:04:42.020 19:42:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.020 19:42:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.020 19:42:35 event -- common/autotest_common.sh@1142 -- # return 0 00:04:42.020 19:42:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.020 19:42:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:42.020 19:42:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.020 19:42:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.020 ************************************ 00:04:42.020 START TEST event_reactor 00:04:42.020 ************************************ 00:04:42.020 19:42:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.020 [2024-07-15 19:42:35.925629] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:42.020 [2024-07-15 19:42:35.925747] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:04:42.020 [2024-07-15 19:42:36.066835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.020 [2024-07-15 19:42:36.198227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.396 test_start 00:04:43.396 oneshot 00:04:43.396 tick 100 00:04:43.396 tick 100 00:04:43.396 tick 250 00:04:43.396 tick 100 00:04:43.396 tick 100 00:04:43.396 tick 250 00:04:43.396 tick 500 00:04:43.396 tick 100 00:04:43.396 tick 100 00:04:43.396 tick 100 00:04:43.396 tick 250 00:04:43.396 tick 100 00:04:43.396 tick 100 00:04:43.396 test_end 00:04:43.396 00:04:43.396 real 0m1.376s 00:04:43.396 user 0m1.198s 00:04:43.396 sys 0m0.071s 00:04:43.396 19:42:37 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.396 19:42:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.396 ************************************ 00:04:43.396 END TEST event_reactor 00:04:43.396 ************************************ 00:04:43.396 19:42:37 event -- common/autotest_common.sh@1142 -- # return 0 00:04:43.396 19:42:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.396 19:42:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:43.397 19:42:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.397 19:42:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.397 ************************************ 00:04:43.397 START TEST event_reactor_perf 00:04:43.397 ************************************ 00:04:43.397 19:42:37 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.397 [2024-07-15 19:42:37.349937] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:04:43.397 [2024-07-15 19:42:37.350024] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:04:43.397 [2024-07-15 19:42:37.485213] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.397 [2024-07-15 19:42:37.607866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.787 test_start 00:04:44.787 test_end 00:04:44.787 Performance: 360798 events per second 00:04:44.787 00:04:44.787 real 0m1.355s 00:04:44.787 user 0m1.195s 00:04:44.787 sys 0m0.054s 00:04:44.787 19:42:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.787 19:42:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.787 ************************************ 00:04:44.787 END TEST event_reactor_perf 00:04:44.787 ************************************ 00:04:44.787 19:42:38 event -- common/autotest_common.sh@1142 -- # return 0 00:04:44.787 19:42:38 event -- event/event.sh@49 -- # uname -s 00:04:44.787 19:42:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.787 19:42:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.787 19:42:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.787 19:42:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.787 19:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.787 ************************************ 00:04:44.787 START TEST event_scheduler 00:04:44.787 ************************************ 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.787 * Looking for test storage... 00:04:44.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:44.787 19:42:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.787 19:42:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60109 00:04:44.787 19:42:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.787 19:42:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60109 00:04:44.787 19:42:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60109 ']' 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.787 19:42:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.787 [2024-07-15 19:42:38.886641] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:04:44.787 [2024-07-15 19:42:38.886752] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:04:44.787 [2024-07-15 19:42:39.027570] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.045 [2024-07-15 19:42:39.148213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.045 [2024-07-15 19:42:39.148338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.045 [2024-07-15 19:42:39.148410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.045 [2024-07-15 19:42:39.148411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:45.979 19:42:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.979 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.979 POWER: Cannot set governor of lcore 0 to performance 00:04:45.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.979 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.979 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.979 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:45.979 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:45.979 POWER: Unable to set Power Management Environment for lcore 0 00:04:45.979 [2024-07-15 19:42:39.889847] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:45.979 [2024-07-15 19:42:39.889861] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:45.979 [2024-07-15 19:42:39.889870] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:45.979 [2024-07-15 19:42:39.889883] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:45.979 [2024-07-15 19:42:39.889891] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:45.979 [2024-07-15 19:42:39.889898] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.979 19:42:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 [2024-07-15 19:42:39.951551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:45.979 [2024-07-15 19:42:39.987009] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.979 19:42:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.979 19:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 ************************************ 00:04:45.979 START TEST scheduler_create_thread 00:04:45.979 ************************************ 00:04:45.979 19:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:45.979 19:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.979 19:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.979 19:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 2 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 3 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.979 4 00:04:45.979 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 5 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 6 00:04:45.980 
19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 7 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 8 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 9 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 10 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.980 19:42:40 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.980 19:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.363 19:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.363 19:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.363 19:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.363 19:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.363 19:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.746 19:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.746 00:04:48.746 real 0m2.613s 00:04:48.746 user 0m0.017s 00:04:48.746 sys 0m0.008s 00:04:48.746 19:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.746 19:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.746 ************************************ 00:04:48.746 END TEST scheduler_create_thread 00:04:48.746 ************************************ 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:48.746 19:42:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.746 19:42:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60109 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60109 ']' 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60109 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60109 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:48.746 killing process with pid 60109 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60109' 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60109 00:04:48.746 19:42:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60109 00:04:49.021 [2024-07-15 19:42:43.091728] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
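Everything the scheduler test did above went through JSON-RPC against an app started with --wait-for-rpc: select the dynamic scheduler, finish framework init, then create, re-weight and delete threads with different core masks and activity percentages through the test's scheduler_plugin. The POWER/cpufreq errors are expected in this VM (no scaling governor is exposed), so the dynamic scheduler loads but cannot drive frequency scaling. A rough by-hand equivalent, assuming the plugin module is importable from the test directory via PYTHONPATH (the scheduler_thread_* methods come from the test plugin, not core SPDK):

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc() { PYTHONPATH="$SPDK/test/event/scheduler" "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
  rpc framework_set_scheduler dynamic
  rpc framework_start_init
  tid=$(rpc scheduler_thread_create -n half_active -a 0)   # prints the new thread id
  rpc scheduler_thread_set_active "$tid" 50                # make it 50% active
  rpc scheduler_thread_delete "$tid"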
00:04:49.280 00:04:49.280 real 0m4.575s 00:04:49.280 user 0m8.654s 00:04:49.280 sys 0m0.367s 00:04:49.280 19:42:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.280 ************************************ 00:04:49.280 END TEST event_scheduler 00:04:49.280 ************************************ 00:04:49.280 19:42:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.280 19:42:43 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.280 19:42:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.281 19:42:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.281 19:42:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.281 19:42:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.281 19:42:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.281 ************************************ 00:04:49.281 START TEST app_repeat 00:04:49.281 ************************************ 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60208 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.281 Process app_repeat pid: 60208 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60208' 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.281 spdk_app_start Round 0 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.281 19:42:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60208 /var/tmp/spdk-nbd.sock 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60208 ']' 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.281 19:42:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.281 [2024-07-15 19:42:43.404958] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:04:49.281 [2024-07-15 19:42:43.405037] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60208 ] 00:04:49.539 [2024-07-15 19:42:43.540252] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.539 [2024-07-15 19:42:43.694745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.539 [2024-07-15 19:42:43.694761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.539 [2024-07-15 19:42:43.754518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.474 19:42:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.474 19:42:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.474 19:42:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.474 Malloc0 00:04:50.474 19:42:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.739 Malloc1 00:04:50.739 19:42:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.739 19:42:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.996 /dev/nbd0 00:04:50.996 19:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.996 19:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:50.996 19:42:45 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.996 1+0 records in 00:04:50.996 1+0 records out 00:04:50.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027301 s, 15.0 MB/s 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:50.996 19:42:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:50.996 19:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.996 19:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.996 19:42:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.254 /dev/nbd1 00:04:51.254 19:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.254 19:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:51.254 19:42:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.254 1+0 records in 00:04:51.254 1+0 records out 00:04:51.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003451 s, 11.9 MB/s 00:04:51.511 19:42:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.512 19:42:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:51.512 19:42:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.512 19:42:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:51.512 19:42:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:51.512 19:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.512 19:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.512 19:42:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:51.512 19:42:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.512 19:42:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.770 { 00:04:51.770 "nbd_device": "/dev/nbd0", 00:04:51.770 "bdev_name": "Malloc0" 00:04:51.770 }, 00:04:51.770 { 00:04:51.770 "nbd_device": "/dev/nbd1", 00:04:51.770 "bdev_name": "Malloc1" 00:04:51.770 } 00:04:51.770 ]' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.770 { 00:04:51.770 "nbd_device": "/dev/nbd0", 00:04:51.770 "bdev_name": "Malloc0" 00:04:51.770 }, 00:04:51.770 { 00:04:51.770 "nbd_device": "/dev/nbd1", 00:04:51.770 "bdev_name": "Malloc1" 00:04:51.770 } 00:04:51.770 ]' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.770 /dev/nbd1' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.770 /dev/nbd1' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.770 256+0 records in 00:04:51.770 256+0 records out 00:04:51.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103835 s, 101 MB/s 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.770 256+0 records in 00:04:51.770 256+0 records out 00:04:51.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202039 s, 51.9 MB/s 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.770 256+0 records in 00:04:51.770 256+0 records out 00:04:51.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301459 s, 34.8 MB/s 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.770 19:42:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.028 19:42:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.286 19:42:46 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.286 19:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.558 19:42:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.558 19:42:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.815 19:42:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.380 [2024-07-15 19:42:47.327568] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.380 [2024-07-15 19:42:47.453461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.380 [2024-07-15 19:42:47.453477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.380 [2024-07-15 19:42:47.513313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.380 [2024-07-15 19:42:47.513392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.380 [2024-07-15 19:42:47.513406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.906 spdk_app_start Round 1 00:04:55.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.906 19:42:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.906 19:42:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.906 19:42:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60208 /var/tmp/spdk-nbd.sock 00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60208 ']' 00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
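Each app_repeat round exercises the same data path seen above: with the nbd kernel module loaded, two 64 MiB malloc bdevs (4 KiB blocks) are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each NBD device with O_DIRECT and compared back, and the app is then told to exit via spdk_kill_instance SIGTERM before the next round starts. Done by hand against a running app it looks roughly like this (the temp-file path is illustrative):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096           # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
  rpc nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0  # non-zero exit if the data read back differs
  rpc nbd_stop_disk /dev/nbd0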
00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.906 19:42:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.164 19:42:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.164 19:42:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:56.164 19:42:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.421 Malloc0 00:04:56.421 19:42:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.679 Malloc1 00:04:56.679 19:42:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.679 19:42:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.938 /dev/nbd0 00:04:56.938 19:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.938 19:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.938 1+0 records in 00:04:56.938 1+0 records out 
00:04:56.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035682 s, 11.5 MB/s 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:56.938 19:42:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:56.938 19:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.938 19:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.938 19:42:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.196 /dev/nbd1 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.196 1+0 records in 00:04:57.196 1+0 records out 00:04:57.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311053 s, 13.2 MB/s 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:57.196 19:42:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.196 19:42:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.454 { 00:04:57.454 "nbd_device": "/dev/nbd0", 00:04:57.454 "bdev_name": "Malloc0" 00:04:57.454 }, 00:04:57.454 { 00:04:57.454 "nbd_device": "/dev/nbd1", 00:04:57.454 "bdev_name": "Malloc1" 00:04:57.454 } 
00:04:57.454 ]' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.454 { 00:04:57.454 "nbd_device": "/dev/nbd0", 00:04:57.454 "bdev_name": "Malloc0" 00:04:57.454 }, 00:04:57.454 { 00:04:57.454 "nbd_device": "/dev/nbd1", 00:04:57.454 "bdev_name": "Malloc1" 00:04:57.454 } 00:04:57.454 ]' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.454 /dev/nbd1' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.454 /dev/nbd1' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.454 256+0 records in 00:04:57.454 256+0 records out 00:04:57.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646439 s, 162 MB/s 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.454 256+0 records in 00:04:57.454 256+0 records out 00:04:57.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242468 s, 43.2 MB/s 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.454 19:42:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.712 256+0 records in 00:04:57.712 256+0 records out 00:04:57.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256939 s, 40.8 MB/s 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.712 19:42:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.712 19:42:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.970 19:42:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.970 19:42:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.970 19:42:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.970 19:42:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.970 19:42:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.238 19:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.496 19:42:52 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.496 19:42:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.496 19:42:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.755 19:42:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.014 [2024-07-15 19:42:53.241195] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.275 [2024-07-15 19:42:53.374350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.275 [2024-07-15 19:42:53.374360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.275 [2024-07-15 19:42:53.458347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:59.275 [2024-07-15 19:42:53.458465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.275 [2024-07-15 19:42:53.458480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.804 spdk_app_start Round 2 00:05:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.804 19:42:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.804 19:42:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.804 19:42:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60208 /var/tmp/spdk-nbd.sock 00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60208 ']' 00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
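The Round 1 trace above is the whole nbd_dd_data_verify cycle: seed a temporary file from /dev/urandom, copy it onto each exported /dev/nbdX, compare the devices back against the file, then stop every export and wait for its node to leave /proc/partitions. A condensed, stand-alone sketch of that pattern (the 1 MiB size, file locations and RPC socket are illustrative, not lifted from the harness):

  tmp=$(mktemp)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of known data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # push it to every export
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                              # read back, verify byte-for-byte
  done
  rm "$tmp"
  for dev in /dev/nbd0 /dev/nbd1; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do                         # wait for the kernel node to disappear
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  done
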
00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.804 19:42:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.066 19:42:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.066 19:42:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:02.066 19:42:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.323 Malloc0 00:05:02.323 19:42:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.579 Malloc1 00:05:02.579 19:42:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.579 19:42:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.837 /dev/nbd0 00:05:02.837 19:42:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.837 19:42:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.837 1+0 records in 00:05:02.837 1+0 records out 
00:05:02.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029968 s, 13.7 MB/s 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.837 19:42:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.837 19:42:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.837 19:42:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.837 19:42:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.094 /dev/nbd1 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.094 1+0 records in 00:05:03.094 1+0 records out 00:05:03.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283055 s, 14.5 MB/s 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.094 19:42:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.094 19:42:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.352 { 00:05:03.352 "nbd_device": "/dev/nbd0", 00:05:03.352 "bdev_name": "Malloc0" 00:05:03.352 }, 00:05:03.352 { 00:05:03.352 "nbd_device": "/dev/nbd1", 00:05:03.352 "bdev_name": "Malloc1" 00:05:03.352 } 
00:05:03.352 ]' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.352 { 00:05:03.352 "nbd_device": "/dev/nbd0", 00:05:03.352 "bdev_name": "Malloc0" 00:05:03.352 }, 00:05:03.352 { 00:05:03.352 "nbd_device": "/dev/nbd1", 00:05:03.352 "bdev_name": "Malloc1" 00:05:03.352 } 00:05:03.352 ]' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.352 /dev/nbd1' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.352 /dev/nbd1' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.352 19:42:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.609 256+0 records in 00:05:03.609 256+0 records out 00:05:03.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00814257 s, 129 MB/s 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.609 256+0 records in 00:05:03.609 256+0 records out 00:05:03.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262145 s, 40.0 MB/s 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.609 256+0 records in 00:05:03.609 256+0 records out 00:05:03.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321566 s, 32.6 MB/s 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.609 19:42:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.609 19:42:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.610 19:42:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.610 19:42:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.610 19:42:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.610 19:42:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.866 19:42:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.123 19:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.380 19:42:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.380 19:42:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.637 19:42:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.894 [2024-07-15 19:42:59.113574] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.151 [2024-07-15 19:42:59.238791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.151 [2024-07-15 19:42:59.238814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.151 [2024-07-15 19:42:59.313433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.151 [2024-07-15 19:42:59.313551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.151 [2024-07-15 19:42:59.313567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.704 19:43:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60208 /var/tmp/spdk-nbd.sock 00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60208 ']' 00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
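Every round above finishes identically: spdk_kill_instance SIGTERM is sent over the nbd RPC socket and the driver sleeps while the app_repeat example re-enters spdk_app_start for the next round. The repeat loop in event.sh amounts to roughly this (condensed; the bdev sizes and socket path mirror the trace, the middle of each round is elided):

  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      # two 64 MiB malloc bdevs with 4 KiB blocks back the nbd exports
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
      # ... export both bdevs, run the write/verify pass, stop the exports ...
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                           # let the app restart for its next round
  done
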
00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.704 19:43:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.963 19:43:02 event.app_repeat -- event/event.sh@39 -- # killprocess 60208 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60208 ']' 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60208 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60208 00:05:07.963 killing process with pid 60208 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60208' 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60208 00:05:07.963 19:43:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60208 00:05:08.220 spdk_app_start is called in Round 0. 00:05:08.220 Shutdown signal received, stop current app iteration 00:05:08.220 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:08.220 spdk_app_start is called in Round 1. 00:05:08.220 Shutdown signal received, stop current app iteration 00:05:08.220 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:08.220 spdk_app_start is called in Round 2. 00:05:08.221 Shutdown signal received, stop current app iteration 00:05:08.221 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:08.221 spdk_app_start is called in Round 3. 
00:05:08.221 Shutdown signal received, stop current app iteration 00:05:08.221 19:43:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:08.221 19:43:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:08.221 00:05:08.221 real 0m18.943s 00:05:08.221 user 0m41.890s 00:05:08.221 sys 0m3.075s 00:05:08.221 19:43:02 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.221 19:43:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.221 ************************************ 00:05:08.221 END TEST app_repeat 00:05:08.221 ************************************ 00:05:08.221 19:43:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.221 19:43:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:08.221 19:43:02 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.221 19:43:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.221 19:43:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.221 19:43:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.221 ************************************ 00:05:08.221 START TEST cpu_locks 00:05:08.221 ************************************ 00:05:08.221 19:43:02 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.221 * Looking for test storage... 00:05:08.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.221 19:43:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:08.221 19:43:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:08.221 19:43:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:08.221 19:43:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:08.221 19:43:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.221 19:43:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.221 19:43:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.221 ************************************ 00:05:08.221 START TEST default_locks 00:05:08.221 ************************************ 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60641 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60641 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60641 ']' 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.221 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
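Each cpu_locks case starts a fresh spdk_tgt and then blocks in waitforlisten until the new target answers on its UNIX-domain RPC socket. The helper's body is not expanded in this trace; a hedged approximation of what it does (retry budget, poll interval and the probe RPC are assumptions, not taken from the harness):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( max_retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1             # target died during startup
          scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
              &>/dev/null && return 0                        # RPC socket is answering
          sleep 0.5
      done
      return 1
  }
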
00:05:08.479 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.479 19:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.479 [2024-07-15 19:43:02.527185] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:08.479 [2024-07-15 19:43:02.528146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:05:08.479 [2024-07-15 19:43:02.665966] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.737 [2024-07-15 19:43:02.775494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.738 [2024-07-15 19:43:02.828324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:09.305 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.305 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:09.305 19:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60641 00:05:09.305 19:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60641 00:05:09.305 19:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60641 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60641 ']' 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60641 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60641 00:05:09.871 killing process with pid 60641 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60641' 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60641 00:05:09.871 19:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60641 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60641 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60641 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:10.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.129 ERROR: process (pid: 60641) is no longer running 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60641 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60641 ']' 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.129 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60641) - No such process 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.129 ************************************ 00:05:10.129 END TEST default_locks 00:05:10.129 ************************************ 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.129 00:05:10.129 real 0m1.845s 00:05:10.129 user 0m1.993s 00:05:10.129 sys 0m0.547s 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.129 19:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.129 19:43:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.129 19:43:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.129 19:43:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.129 19:43:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.129 19:43:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.129 ************************************ 00:05:10.129 START TEST default_locks_via_rpc 00:05:10.129 ************************************ 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60693 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
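The default_locks case that just finished is built around a single predicate: while a target started with -m 0x1 is alive, lslocks must list an advisory lock named spdk_cpu_lock under its pid, and after the process is killed both that check and a waitforlisten on the stale pid must fail (the "No such process" above). The predicate itself is simply:

  locks_exist() {
      # spdk_tgt takes a per-core lock file (its name contains spdk_cpu_lock)
      # for every core in its mask; lslocks lists it under the owning pid
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid"               # must succeed while the target runs
  kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid"
  ! locks_exist "$spdk_tgt_pid"             # must fail once the process is gone
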
00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60693 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60693 ']' 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.129 19:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.387 [2024-07-15 19:43:04.420638] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:10.387 [2024-07-15 19:43:04.420724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60693 ] 00:05:10.387 [2024-07-15 19:43:04.552467] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.645 [2024-07-15 19:43:04.668593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.645 [2024-07-15 19:43:04.723938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60693 00:05:11.210 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60693 00:05:11.210 19:43:05 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60693 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60693 ']' 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60693 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60693 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.778 killing process with pid 60693 00:05:11.778 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.779 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60693' 00:05:11.779 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60693 00:05:11.779 19:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60693 00:05:12.037 00:05:12.037 real 0m1.793s 00:05:12.037 user 0m1.934s 00:05:12.037 sys 0m0.513s 00:05:12.037 ************************************ 00:05:12.037 END TEST default_locks_via_rpc 00:05:12.037 ************************************ 00:05:12.037 19:43:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.037 19:43:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.037 19:43:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.037 19:43:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.037 19:43:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.037 19:43:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.037 19:43:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.037 ************************************ 00:05:12.037 START TEST non_locking_app_on_locked_coremask 00:05:12.037 ************************************ 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60744 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60744 /var/tmp/spdk.sock 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60744 ']' 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.037 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.037 19:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.037 [2024-07-15 19:43:06.277894] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:12.037 [2024-07-15 19:43:06.278016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60744 ] 00:05:12.296 [2024-07-15 19:43:06.418421] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.296 [2024-07-15 19:43:06.537199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.555 [2024-07-15 19:43:06.592927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60760 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60760 /var/tmp/spdk2.sock 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60760 ']' 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.134 19:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.134 [2024-07-15 19:43:07.330889] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:13.134 [2024-07-15 19:43:07.331553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60760 ] 00:05:13.391 [2024-07-15 19:43:07.477353] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
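The pair of launches traced here is the whole point of non_locking_app_on_locked_coremask: the first target owns the core-0 lock, and a second instance can still come up on the same mask only because it is told not to take the cpumask locks and is given its own RPC socket. Stripped down (binary path relative to the repo, both sockets as in the trace):

  build/bin/spdk_tgt -m 0x1 &                                            # claims the core-0 lock
  pid1=$!
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                                                # shares core 0 without locking it
  # without --disable-cpumask-locks the second launch would abort, as the
  # locking_app_on_locked_coremask case further down demonstrates
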
00:05:13.391 [2024-07-15 19:43:07.477422] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.649 [2024-07-15 19:43:07.696083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.649 [2024-07-15 19:43:07.809113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.216 19:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.216 19:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.216 19:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60744 00:05:14.216 19:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.216 19:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60744 ']' 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.155 killing process with pid 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60744' 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60744 00:05:15.155 19:43:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60744 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60760 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60760 ']' 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60760 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60760 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.100 killing process with pid 60760 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.100 19:43:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60760' 00:05:16.100 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60760 00:05:16.101 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60760 00:05:16.359 00:05:16.359 real 0m4.274s 00:05:16.359 user 0m4.756s 00:05:16.359 sys 0m1.208s 00:05:16.359 ************************************ 00:05:16.359 END TEST non_locking_app_on_locked_coremask 00:05:16.359 ************************************ 00:05:16.359 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.359 19:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 19:43:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:16.359 19:43:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:16.359 19:43:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.359 19:43:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.359 19:43:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 ************************************ 00:05:16.359 START TEST locking_app_on_unlocked_coremask 00:05:16.359 ************************************ 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60827 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60827 /var/tmp/spdk.sock 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60827 ']' 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:16.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.359 19:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.359 [2024-07-15 19:43:10.600459] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:16.359 [2024-07-15 19:43:10.600635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:05:16.617 [2024-07-15 19:43:10.738864] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:16.617 [2024-07-15 19:43:10.738938] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.617 [2024-07-15 19:43:10.851468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.875 [2024-07-15 19:43:10.906087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60843 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60843 /var/tmp/spdk2.sock 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60843 ']' 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.441 19:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.441 [2024-07-15 19:43:11.608874] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:17.441 [2024-07-15 19:43:11.609004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60843 ] 00:05:17.699 [2024-07-15 19:43:11.753757] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.957 [2024-07-15 19:43:11.971000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.957 [2024-07-15 19:43:12.082873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.524 19:43:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.524 19:43:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:18.524 19:43:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60843 00:05:18.524 19:43:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60843 00:05:18.524 19:43:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60827 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60827 ']' 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60827 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60827 00:05:19.215 killing process with pid 60827 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60827' 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60827 00:05:19.215 19:43:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60827 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60843 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60843 ']' 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60843 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60843 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.162 killing process with pid 60843 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60843' 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60843 00:05:20.162 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60843 00:05:20.420 00:05:20.420 real 0m4.061s 00:05:20.420 user 0m4.513s 00:05:20.420 sys 0m1.088s 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.420 ************************************ 00:05:20.420 END TEST locking_app_on_unlocked_coremask 00:05:20.420 ************************************ 00:05:20.420 19:43:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.420 19:43:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:20.420 19:43:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.420 19:43:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.420 19:43:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.420 ************************************ 00:05:20.420 START TEST locking_app_on_locked_coremask 00:05:20.420 ************************************ 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60910 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60910 /var/tmp/spdk.sock 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60910 ']' 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.420 19:43:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.678 [2024-07-15 19:43:14.706496] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:20.678 [2024-07-15 19:43:14.706614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:05:20.678 [2024-07-15 19:43:14.842016] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.936 [2024-07-15 19:43:14.955581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.936 [2024-07-15 19:43:15.009598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60926 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60926 /var/tmp/spdk2.sock 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60926 /var/tmp/spdk2.sock 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60926 /var/tmp/spdk2.sock 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60926 ']' 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.501 19:43:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.501 [2024-07-15 19:43:15.721786] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:21.501 [2024-07-15 19:43:15.722370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:05:21.758 [2024-07-15 19:43:15.857234] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60910 has claimed it. 00:05:21.758 [2024-07-15 19:43:15.864377] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.322 ERROR: process (pid: 60926) is no longer running 00:05:22.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60926) - No such process 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60910 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60910 00:05:22.322 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60910 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60910 ']' 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60910 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60910 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.887 killing process with pid 60910 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60910' 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60910 00:05:22.887 19:43:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60910 00:05:23.144 00:05:23.144 real 0m2.617s 00:05:23.144 user 0m2.988s 00:05:23.144 sys 0m0.644s 00:05:23.144 19:43:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.144 19:43:17 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:23.144 ************************************ 00:05:23.144 END TEST locking_app_on_locked_coremask 00:05:23.144 ************************************ 00:05:23.144 19:43:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.144 19:43:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:23.144 19:43:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.144 19:43:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.144 19:43:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.144 ************************************ 00:05:23.144 START TEST locking_overlapped_coremask 00:05:23.144 ************************************ 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60972 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60972 /var/tmp/spdk.sock 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60972 ']' 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.144 19:43:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.144 [2024-07-15 19:43:17.387891] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:23.144 [2024-07-15 19:43:17.388017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60972 ] 00:05:23.402 [2024-07-15 19:43:17.525853] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.661 [2024-07-15 19:43:17.646706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.661 [2024-07-15 19:43:17.646876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.661 [2024-07-15 19:43:17.646883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.661 [2024-07-15 19:43:17.704752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60990 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60990 /var/tmp/spdk2.sock 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60990 /var/tmp/spdk2.sock 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60990 /var/tmp/spdk2.sock 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60990 ']' 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.228 19:43:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.228 [2024-07-15 19:43:18.454728] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:24.228 [2024-07-15 19:43:18.454861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60990 ] 00:05:24.487 [2024-07-15 19:43:18.604652] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60972 has claimed it. 00:05:24.487 [2024-07-15 19:43:18.604734] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.054 ERROR: process (pid: 60990) is no longer running 00:05:25.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60990) - No such process 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60972 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60972 ']' 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60972 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60972 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.054 killing process with pid 60972 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60972' 00:05:25.054 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60972 00:05:25.054 19:43:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60972 00:05:25.622 00:05:25.622 real 0m2.262s 00:05:25.622 user 0m6.256s 00:05:25.622 sys 0m0.463s 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.622 ************************************ 00:05:25.622 END TEST locking_overlapped_coremask 00:05:25.622 ************************************ 00:05:25.622 19:43:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:25.622 19:43:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:25.622 19:43:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.622 19:43:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.622 19:43:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.622 ************************************ 00:05:25.622 START TEST locking_overlapped_coremask_via_rpc 00:05:25.622 ************************************ 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61035 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61035 /var/tmp/spdk.sock 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61035 ']' 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.622 19:43:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.622 [2024-07-15 19:43:19.689778] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:25.622 [2024-07-15 19:43:19.689850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:05:25.622 [2024-07-15 19:43:19.822895] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
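The claim_cpu_cores error above is a direct consequence of the two core masks used by this test: the first target ran with -m 0x7 and the second with -m 0x1c, so they collide on core 2. Plain bash arithmetic with the values from the trace makes the overlap explicit:

    first=0x7     # cores 0,1,2 - held by pid 60972
    second=0x1c   # cores 2,3,4 - requested by pid 60990
    printf 'overlap mask: 0x%x\n' $(( first & second ))   # prints 0x4, i.e. core 2
    # Core 2 is already pinned via /var/tmp/spdk_cpu_lock_002, so the second
    # spdk_tgt aborts and check_remaining_locks still finds spdk_cpu_lock_000..002.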
00:05:25.622 [2024-07-15 19:43:19.822956] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.887 [2024-07-15 19:43:19.930891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.887 [2024-07-15 19:43:19.931030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.887 [2024-07-15 19:43:19.931033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.887 [2024-07-15 19:43:19.986811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61053 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61053 /var/tmp/spdk2.sock 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61053 ']' 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.455 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.456 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.456 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.456 19:43:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.715 [2024-07-15 19:43:20.727969] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:26.715 [2024-07-15 19:43:20.728577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61053 ] 00:05:26.715 [2024-07-15 19:43:20.879573] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.715 [2024-07-15 19:43:20.879652] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.974 [2024-07-15 19:43:21.090569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.974 [2024-07-15 19:43:21.094339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.974 [2024-07-15 19:43:21.094342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.974 [2024-07-15 19:43:21.206417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.543 [2024-07-15 19:43:21.703385] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61035 has claimed it. 
00:05:27.543 request: 00:05:27.543 { 00:05:27.543 "method": "framework_enable_cpumask_locks", 00:05:27.543 "req_id": 1 00:05:27.543 } 00:05:27.543 Got JSON-RPC error response 00:05:27.543 response: 00:05:27.543 { 00:05:27.543 "code": -32603, 00:05:27.543 "message": "Failed to claim CPU core: 2" 00:05:27.543 } 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61035 /var/tmp/spdk.sock 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61035 ']' 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.543 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61053 /var/tmp/spdk2.sock 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61053 ']' 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
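With --disable-cpumask-locks the per-core locks are only taken when the framework_enable_cpumask_locks RPC is issued, which is why this variant fails at RPC time instead of at startup. Reproduced by hand it would look roughly like this (the rpc.py invocation is an illustration; the trace drives the same RPC through the rpc_cmd wrapper):

    # First instance (pid 61035) claims the locks and the RPC returns success.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # Second instance on the overlapping mask: the same RPC fails with the
    # JSON-RPC error shown above, code -32603 "Failed to claim CPU core: 2".
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks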
00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.802 19:43:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.060 ************************************ 00:05:28.060 END TEST locking_overlapped_coremask_via_rpc 00:05:28.060 ************************************ 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.060 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.060 00:05:28.060 real 0m2.593s 00:05:28.060 user 0m1.296s 00:05:28.060 sys 0m0.218s 00:05:28.061 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.061 19:43:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.061 19:43:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:28.061 19:43:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61035 ]] 00:05:28.061 19:43:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61035 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61035 ']' 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61035 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61035 00:05:28.061 killing process with pid 61035 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61035' 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61035 00:05:28.061 19:43:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61035 00:05:28.627 19:43:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61053 ]] 00:05:28.627 19:43:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61053 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61053 ']' 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61053 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:28.627 19:43:22 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61053 00:05:28.627 killing process with pid 61053 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61053' 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61053 00:05:28.627 19:43:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61053 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.202 Process with pid 61035 is not found 00:05:29.202 Process with pid 61053 is not found 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61035 ]] 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61035 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61035 ']' 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61035 00:05:29.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61035) - No such process 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61035 is not found' 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61053 ]] 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61053 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61053 ']' 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61053 00:05:29.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61053) - No such process 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61053 is not found' 00:05:29.202 19:43:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.202 ************************************ 00:05:29.202 END TEST cpu_locks 00:05:29.202 ************************************ 00:05:29.202 00:05:29.202 real 0m20.775s 00:05:29.202 user 0m36.040s 00:05:29.202 sys 0m5.533s 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.202 19:43:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.202 19:43:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.202 ************************************ 00:05:29.202 END TEST event 00:05:29.202 ************************************ 00:05:29.202 00:05:29.202 real 0m48.789s 00:05:29.202 user 1m33.284s 00:05:29.202 sys 0m9.415s 00:05:29.202 19:43:23 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.202 19:43:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.202 19:43:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.202 19:43:23 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:29.202 19:43:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.202 19:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.202 19:43:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.202 ************************************ 00:05:29.202 START TEST thread 
00:05:29.202 ************************************ 00:05:29.202 19:43:23 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:29.202 * Looking for test storage... 00:05:29.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:29.202 19:43:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.202 19:43:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:29.202 19:43:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.202 19:43:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.202 ************************************ 00:05:29.202 START TEST thread_poller_perf 00:05:29.202 ************************************ 00:05:29.202 19:43:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.202 [2024-07-15 19:43:23.335890] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:29.202 [2024-07-15 19:43:23.335986] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61176 ] 00:05:29.460 [2024-07-15 19:43:23.467682] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.460 [2024-07-15 19:43:23.583055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.460 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:30.834 ====================================== 00:05:30.834 busy:2209766844 (cyc) 00:05:30.834 total_run_count: 354000 00:05:30.834 tsc_hz: 2200000000 (cyc) 00:05:30.834 ====================================== 00:05:30.834 poller_cost: 6242 (cyc), 2837 (nsec) 00:05:30.834 00:05:30.834 real 0m1.358s 00:05:30.834 user 0m1.200s 00:05:30.834 sys 0m0.052s 00:05:30.834 19:43:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.834 ************************************ 00:05:30.834 END TEST thread_poller_perf 00:05:30.834 ************************************ 00:05:30.834 19:43:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.834 19:43:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:30.834 19:43:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.834 19:43:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:30.834 19:43:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.834 19:43:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.834 ************************************ 00:05:30.834 START TEST thread_poller_perf 00:05:30.834 ************************************ 00:05:30.834 19:43:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.834 [2024-07-15 19:43:24.749981] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:30.834 [2024-07-15 19:43:24.750075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61206 ] 00:05:30.834 [2024-07-15 19:43:24.884533] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.834 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:30.834 [2024-07-15 19:43:24.987248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.211 ====================================== 00:05:32.211 busy:2202003800 (cyc) 00:05:32.211 total_run_count: 4733000 00:05:32.211 tsc_hz: 2200000000 (cyc) 00:05:32.211 ====================================== 00:05:32.211 poller_cost: 465 (cyc), 211 (nsec) 00:05:32.211 ************************************ 00:05:32.211 END TEST thread_poller_perf 00:05:32.211 ************************************ 00:05:32.211 00:05:32.211 real 0m1.345s 00:05:32.211 user 0m1.183s 00:05:32.211 sys 0m0.055s 00:05:32.211 19:43:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.211 19:43:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 19:43:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:32.211 19:43:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:32.211 ************************************ 00:05:32.211 END TEST thread 00:05:32.211 ************************************ 00:05:32.211 00:05:32.211 real 0m2.887s 00:05:32.211 user 0m2.447s 00:05:32.211 sys 0m0.218s 00:05:32.211 19:43:26 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.211 19:43:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 19:43:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.211 19:43:26 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:32.211 19:43:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.211 19:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.211 19:43:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 ************************************ 00:05:32.211 START TEST accel 00:05:32.211 ************************************ 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:32.211 * Looking for test storage... 00:05:32.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:32.211 19:43:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:32.211 19:43:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:32.211 19:43:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.211 19:43:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61280 00:05:32.211 19:43:26 accel -- accel/accel.sh@63 -- # waitforlisten 61280 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@829 -- # '[' -z 61280 ']' 00:05:32.211 19:43:26 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
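The poller_perf summaries in this run follow one formula: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC frequency. Checking the -l 1 run's numbers in bash (the -l 0 run works out the same way: 2202003800 / 4733000 = 465 cyc = 211 ns):

    busy=2209766844; runs=354000; tsc_hz=2200000000
    cyc=$(( busy / runs ))                  # 6242 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2837 ns at the 2.2 GHz TSC
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"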
00:05:32.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.211 19:43:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.211 19:43:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:32.211 19:43:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.211 19:43:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.211 19:43:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.211 19:43:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.211 19:43:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.211 19:43:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:32.211 19:43:26 accel -- accel/accel.sh@41 -- # jq -r . 00:05:32.211 [2024-07-15 19:43:26.323465] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:32.211 [2024-07-15 19:43:26.323541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61280 ] 00:05:32.469 [2024-07-15 19:43:26.456429] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.469 [2024-07-15 19:43:26.573831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.469 [2024-07-15 19:43:26.627622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.036 19:43:27 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.036 19:43:27 accel -- common/autotest_common.sh@862 -- # return 0 00:05:33.036 19:43:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:33.036 19:43:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:33.036 19:43:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:33.036 19:43:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:33.036 19:43:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:33.036 19:43:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:33.036 19:43:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.036 19:43:27 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:33.036 19:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.036 19:43:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.294 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.294 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.294 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.294 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.294 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.294 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.294 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 
19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # IFS== 00:05:33.295 19:43:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:33.295 19:43:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:33.295 19:43:27 accel -- accel/accel.sh@75 -- # killprocess 61280 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@948 -- # '[' -z 61280 ']' 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@952 -- # kill -0 61280 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@953 -- # uname 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61280 00:05:33.295 killing process with pid 61280 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61280' 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@967 -- # kill 61280 00:05:33.295 19:43:27 accel -- common/autotest_common.sh@972 -- # wait 61280 00:05:33.553 19:43:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:33.553 19:43:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.553 19:43:27 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:33.553 19:43:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:33.553 19:43:27 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.553 19:43:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.553 19:43:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.553 19:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.812 ************************************ 00:05:33.812 START TEST accel_missing_filename 00:05:33.812 ************************************ 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.812 19:43:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:33.812 19:43:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:33.812 [2024-07-15 19:43:27.825971] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:33.812 [2024-07-15 19:43:27.826073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61332 ] 00:05:33.812 [2024-07-15 19:43:27.966621] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.071 [2024-07-15 19:43:28.075914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.071 [2024-07-15 19:43:28.130371] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.071 [2024-07-15 19:43:28.208640] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:34.071 A filename is required. 
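accel_missing_filename is a negative test: accel_perf is started with -w compress but without an -l input file, and the NOT wrapper from autotest_common.sh passes only when the wrapped command exits non-zero. Stripped of the xtrace noise, the check is essentially this sketch (helper and binary names as they appear in the trace):

    # Expected to fail: compress needs an input file (-l), hence the
    # "A filename is required." error logged above.
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # NOT inverts the exit status, so the test case itself still returns 0.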
00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:34.071 ************************************ 00:05:34.071 END TEST accel_missing_filename 00:05:34.071 ************************************ 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.071 00:05:34.071 real 0m0.494s 00:05:34.071 user 0m0.324s 00:05:34.071 sys 0m0.116s 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.071 19:43:28 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:34.332 19:43:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.332 19:43:28 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:34.332 19:43:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:34.332 19:43:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.332 19:43:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.332 ************************************ 00:05:34.332 START TEST accel_compress_verify 00:05:34.332 ************************************ 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.332 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.332 19:43:28 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:34.332 19:43:28 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:34.332 [2024-07-15 19:43:28.366496] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:34.332 [2024-07-15 19:43:28.366589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:05:34.332 [2024-07-15 19:43:28.502805] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.591 [2024-07-15 19:43:28.630149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.591 [2024-07-15 19:43:28.687632] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.591 [2024-07-15 19:43:28.764008] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:34.850 00:05:34.850 Compression does not support the verify option, aborting. 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.850 00:05:34.850 real 0m0.510s 00:05:34.850 user 0m0.341s 00:05:34.850 sys 0m0.113s 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.850 ************************************ 00:05:34.850 END TEST accel_compress_verify 00:05:34.850 ************************************ 00:05:34.850 19:43:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.850 19:43:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.850 ************************************ 00:05:34.850 START TEST accel_wrong_workload 00:05:34.850 ************************************ 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:34.850 19:43:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:34.850 Unsupported workload type: foobar 00:05:34.850 [2024-07-15 19:43:28.925939] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:34.850 accel_perf options: 00:05:34.850 [-h help message] 00:05:34.850 [-q queue depth per core] 00:05:34.850 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.850 [-T number of threads per core 00:05:34.850 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.850 [-t time in seconds] 00:05:34.850 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.850 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.850 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.850 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.850 [-S for crc32c workload, use this seed value (default 0) 00:05:34.850 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.850 [-f for fill workload, use this BYTE value (default 255) 00:05:34.850 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.850 [-y verify result if this switch is on] 00:05:34.850 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.850 Can be used to spread operations across a wider range of memory. 
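The listing above is accel_perf's usage text, printed because '-w foobar' is not a supported workload type. As a hedged sketch built only from flags that appear in that usage text and in the traces in this log (the binary path is the one traced above; whether the harness's '-c /dev/fd/62' JSON config can be omitted is an assumption not verified here):

    # Reproduces the failure above: 'foobar' is not in the supported workload list.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar

    # A valid invocation: 1-second crc32c run, seed 32 (-S), with result
    # verification (-y), mirroring the accel_crc32c test later in this log.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y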
00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.850 00:05:34.850 real 0m0.029s 00:05:34.850 user 0m0.017s 00:05:34.850 sys 0m0.012s 00:05:34.850 ************************************ 00:05:34.850 END TEST accel_wrong_workload 00:05:34.850 ************************************ 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.850 19:43:28 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.850 19:43:28 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.850 19:43:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.850 ************************************ 00:05:34.850 START TEST accel_negative_buffers 00:05:34.850 ************************************ 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.850 19:43:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.850 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.851 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:34.851 19:43:28 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:34.851 -x option must be non-negative. 
00:05:34.851 [2024-07-15 19:43:28.998965] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:34.851 accel_perf options: 00:05:34.851 [-h help message] 00:05:34.851 [-q queue depth per core] 00:05:34.851 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.851 [-T number of threads per core 00:05:34.851 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.851 [-t time in seconds] 00:05:34.851 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.851 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.851 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.851 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.851 [-S for crc32c workload, use this seed value (default 0) 00:05:34.851 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.851 [-f for fill workload, use this BYTE value (default 255) 00:05:34.851 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.851 [-y verify result if this switch is on] 00:05:34.851 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.851 Can be used to spread operations across a wider range of memory. 00:05:34.851 ************************************ 00:05:34.851 END TEST accel_negative_buffers 00:05:34.851 ************************************ 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.851 00:05:34.851 real 0m0.028s 00:05:34.851 user 0m0.013s 00:05:34.851 sys 0m0.013s 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.851 19:43:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 19:43:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.851 19:43:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:34.851 19:43:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:34.851 19:43:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.851 19:43:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 ************************************ 00:05:34.851 START TEST accel_crc32c 00:05:34.851 ************************************ 00:05:34.851 19:43:29 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:34.851 19:43:29 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:34.851 [2024-07-15 19:43:29.073102] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:34.851 [2024-07-15 19:43:29.073206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61419 ] 00:05:35.110 [2024-07-15 19:43:29.214215] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.110 [2024-07-15 19:43:29.331898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.369 19:43:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:36.746 19:43:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.746 ************************************ 00:05:36.746 END TEST accel_crc32c 00:05:36.746 ************************************ 00:05:36.746 00:05:36.746 real 0m1.515s 00:05:36.746 user 0m1.311s 00:05:36.746 sys 0m0.110s 00:05:36.746 19:43:30 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.746 19:43:30 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 19:43:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.746 19:43:30 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:36.746 19:43:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:36.746 19:43:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.746 19:43:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 ************************************ 00:05:36.746 START TEST accel_crc32c_C2 00:05:36.746 ************************************ 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:36.746 19:43:30 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:36.746 [2024-07-15 19:43:30.644787] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:36.746 [2024-07-15 19:43:30.644901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61455 ] 00:05:36.746 [2024-07-15 19:43:30.781434] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.746 [2024-07-15 19:43:30.898765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.746 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.747 19:43:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.122 00:05:38.122 real 0m1.504s 00:05:38.122 user 0m1.303s 00:05:38.122 sys 0m0.108s 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.122 ************************************ 00:05:38.122 END TEST accel_crc32c_C2 00:05:38.122 ************************************ 00:05:38.122 19:43:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 19:43:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.122 19:43:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:38.122 19:43:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:38.122 19:43:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.122 19:43:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 ************************************ 00:05:38.122 START TEST accel_copy 00:05:38.122 ************************************ 00:05:38.122 19:43:32 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.122 19:43:32 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:38.122 19:43:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:38.122 [2024-07-15 19:43:32.206021] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:38.122 [2024-07-15 19:43:32.206156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61488 ] 00:05:38.122 [2024-07-15 19:43:32.354345] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.380 [2024-07-15 19:43:32.451563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 
19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.380 19:43:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:39.796 19:43:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.796 00:05:39.796 real 0m1.490s 00:05:39.796 user 0m1.275s 00:05:39.796 sys 0m0.122s 00:05:39.796 19:43:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.796 ************************************ 00:05:39.797 END TEST accel_copy 00:05:39.797 ************************************ 00:05:39.797 19:43:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:39.797 19:43:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.797 19:43:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.797 19:43:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:39.797 19:43:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.797 19:43:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.797 ************************************ 00:05:39.797 START TEST accel_fill 00:05:39.797 ************************************ 00:05:39.797 19:43:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.797 19:43:33 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:39.797 19:43:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:39.797 [2024-07-15 19:43:33.746663] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:39.797 [2024-07-15 19:43:33.746762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61524 ] 00:05:39.797 [2024-07-15 19:43:33.886005] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.797 [2024-07-15 19:43:33.995379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.055 19:43:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.000 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:41.001 19:43:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.001 00:05:41.001 real 0m1.495s 00:05:41.001 user 0m1.283s 00:05:41.001 sys 0m0.118s 00:05:41.001 19:43:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.001 19:43:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:41.001 ************************************ 00:05:41.001 END TEST accel_fill 00:05:41.001 ************************************ 00:05:41.262 19:43:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.262 19:43:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:41.262 19:43:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.262 19:43:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.262 19:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.262 ************************************ 00:05:41.262 START TEST accel_copy_crc32c 00:05:41.262 ************************************ 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:41.262 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.262 [2024-07-15 19:43:35.283797] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:41.262 [2024-07-15 19:43:35.284422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61553 ] 00:05:41.262 [2024-07-15 19:43:35.414057] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.521 [2024-07-15 19:43:35.522602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.521 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.522 19:43:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
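A minimal sketch of the equivalent standalone invocation for this copy_crc32c case, using only what the trace records: the binary path shown above, -c /dev/fd/62 for the JSON accel configuration that accel.sh feeds over a pipe, -t 1 matching the '1 seconds' value replayed above, -w for the opcode, and -y (result verification, a reading of the flag not stated in the trace itself):
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
Run outside the accel.sh wrapper nothing feeds fd 62, so the -c argument would simply be omitted, leaving the default software path that matches accel_module=software in the readback above.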
00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.900 00:05:42.900 real 0m1.486s 00:05:42.900 user 0m1.280s 00:05:42.900 sys 0m0.110s 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.900 ************************************ 00:05:42.900 END TEST accel_copy_crc32c 00:05:42.900 ************************************ 00:05:42.900 19:43:36 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.900 19:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.900 19:43:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.900 19:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.900 19:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.900 19:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.900 ************************************ 00:05:42.900 START TEST accel_copy_crc32c_C2 00:05:42.900 ************************************ 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.900 19:43:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.900 [2024-07-15 19:43:36.819318] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:42.900 [2024-07-15 19:43:36.819407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:05:42.900 [2024-07-15 19:43:36.957825] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.900 [2024-07-15 19:43:37.054329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.900 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.901 19:43:37 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.901 19:43:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 ************************************ 00:05:44.277 END TEST accel_copy_crc32c_C2 00:05:44.277 ************************************ 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.277 00:05:44.277 real 0m1.490s 00:05:44.277 
user 0m1.276s 00:05:44.277 sys 0m0.119s 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.277 19:43:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.277 19:43:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.277 19:43:38 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:44.277 19:43:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.277 19:43:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.277 19:43:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.277 ************************************ 00:05:44.277 START TEST accel_dualcast 00:05:44.277 ************************************ 00:05:44.277 19:43:38 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:44.277 19:43:38 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:44.277 [2024-07-15 19:43:38.362468] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
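The dualcast run starting here follows the same sketch with only the opcode changed (dualcast copies one source buffer to two destinations):
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y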
00:05:44.277 [2024-07-15 19:43:38.362573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61622 ] 00:05:44.277 [2024-07-15 19:43:38.501769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.537 [2024-07-15 19:43:38.622824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.537 19:43:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.913 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.913 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.913 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.913 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 ************************************ 00:05:45.914 END TEST accel_dualcast 00:05:45.914 ************************************ 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:45.914 19:43:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.914 00:05:45.914 real 0m1.517s 00:05:45.914 user 0m1.303s 00:05:45.914 sys 0m0.121s 00:05:45.914 19:43:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.914 19:43:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:45.914 19:43:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.914 19:43:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:45.914 19:43:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:45.914 19:43:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.914 19:43:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.914 ************************************ 00:05:45.914 START TEST accel_compare 00:05:45.914 ************************************ 00:05:45.914 19:43:39 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:45.914 19:43:39 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:45.914 [2024-07-15 19:43:39.928947] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:45.914 [2024-07-15 19:43:39.929121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:05:45.914 [2024-07-15 19:43:40.069600] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.172 [2024-07-15 19:43:40.191169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.172 19:43:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 
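The compare case replayed above keeps the same shape, with -w compare selecting a buffer-comparison opcode over the 4096-byte buffers:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y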
00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:47.546 19:43:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.546 00:05:47.546 real 0m1.511s 00:05:47.546 user 0m1.289s 00:05:47.546 sys 0m0.126s 00:05:47.546 ************************************ 00:05:47.546 END TEST accel_compare 00:05:47.546 ************************************ 00:05:47.546 19:43:41 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.546 19:43:41 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:47.546 19:43:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.546 19:43:41 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:47.546 19:43:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.546 19:43:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.546 19:43:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.546 ************************************ 00:05:47.546 START TEST accel_xor 00:05:47.546 ************************************ 00:05:47.546 19:43:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:47.546 19:43:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:47.546 [2024-07-15 19:43:41.489304] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:47.546 [2024-07-15 19:43:41.489413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61691 ] 00:05:47.546 [2024-07-15 19:43:41.622816] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.546 [2024-07-15 19:43:41.738011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.805 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
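For this xor run the replay above shows val=2, apparently the XOR source-buffer count left at its default (the later -x 3 variant shows val=3); the standalone sketch is:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y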
00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.806 19:43:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:48.742 19:43:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.742 00:05:48.742 real 0m1.498s 00:05:48.742 user 0m1.284s 00:05:48.742 sys 0m0.119s 00:05:48.742 19:43:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.742 19:43:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:48.742 ************************************ 00:05:48.742 END TEST accel_xor 00:05:48.742 ************************************ 00:05:49.013 19:43:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.013 19:43:43 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:49.013 19:43:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:49.013 19:43:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.013 19:43:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.013 ************************************ 00:05:49.013 START TEST accel_xor 00:05:49.013 ************************************ 00:05:49.013 19:43:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:49.013 19:43:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:49.013 [2024-07-15 19:43:43.041810] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:49.013 [2024-07-15 19:43:43.041899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61733 ] 00:05:49.013 [2024-07-15 19:43:43.178727] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.297 [2024-07-15 19:43:43.301324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.297 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
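This second xor pass adds -x 3, and the replay above correspondingly shows val=3, i.e. three source buffers instead of the default two:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3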
00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.298 19:43:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 ************************************ 00:05:50.673 END TEST accel_xor 00:05:50.673 ************************************ 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:50.673 19:43:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.673 00:05:50.673 real 0m1.532s 00:05:50.673 user 0m1.319s 00:05:50.673 sys 0m0.114s 00:05:50.673 19:43:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.673 19:43:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:50.673 19:43:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.673 19:43:44 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:50.673 19:43:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:50.673 19:43:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.673 19:43:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.673 ************************************ 00:05:50.673 START TEST accel_dif_verify 00:05:50.673 ************************************ 00:05:50.673 19:43:44 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:50.673 19:43:44 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:50.673 [2024-07-15 19:43:44.628121] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:50.673 [2024-07-15 19:43:44.628545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61762 ] 00:05:50.673 [2024-07-15 19:43:44.765751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.673 [2024-07-15 19:43:44.885154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.933 19:43:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 ************************************ 00:05:52.310 END TEST accel_dif_verify 00:05:52.310 ************************************ 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.310 19:43:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:52.311 19:43:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.311 00:05:52.311 real 0m1.522s 00:05:52.311 user 0m1.300s 00:05:52.311 sys 0m0.123s 00:05:52.311 19:43:46 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.311 19:43:46 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:52.311 19:43:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.311 19:43:46 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:52.311 19:43:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:52.311 19:43:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.311 19:43:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.311 ************************************ 00:05:52.311 START TEST accel_dif_generate 00:05:52.311 ************************************ 00:05:52.311 19:43:46 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:52.311 [2024-07-15 19:43:46.199814] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:52.311 [2024-07-15 19:43:46.199900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:05:52.311 [2024-07-15 19:43:46.338126] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.311 [2024-07-15 19:43:46.445789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.311 19:43:46 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.311 19:43:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 ************************************ 00:05:53.689 END TEST accel_dif_generate 00:05:53.689 ************************************ 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.689 19:43:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:53.689 
19:43:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.689 00:05:53.689 real 0m1.499s 00:05:53.689 user 0m1.287s 00:05:53.689 sys 0m0.118s 00:05:53.689 19:43:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.689 19:43:47 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:53.689 19:43:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.689 19:43:47 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:53.689 19:43:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:53.689 19:43:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.689 19:43:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.689 ************************************ 00:05:53.689 START TEST accel_dif_generate_copy 00:05:53.689 ************************************ 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:53.689 19:43:47 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:53.689 [2024-07-15 19:43:47.751475] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:53.689 [2024-07-15 19:43:47.751607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:05:53.689 [2024-07-15 19:43:47.893593] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.948 [2024-07-15 19:43:48.007413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:53.948 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.949 19:43:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.338 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.339 00:05:55.339 real 0m1.509s 00:05:55.339 user 0m1.293s 00:05:55.339 sys 0m0.117s 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.339 19:43:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.339 ************************************ 00:05:55.339 END TEST accel_dif_generate_copy 00:05:55.339 ************************************ 00:05:55.339 19:43:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.339 19:43:49 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:55.339 19:43:49 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.339 19:43:49 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:55.339 19:43:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.339 19:43:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.339 ************************************ 00:05:55.339 START TEST accel_comp 00:05:55.339 ************************************ 00:05:55.339 19:43:49 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:55.339 19:43:49 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:55.339 19:43:49 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:55.339 [2024-07-15 19:43:49.316275] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:55.339 [2024-07-15 19:43:49.316381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61871 ] 00:05:55.339 [2024-07-15 19:43:49.449705] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.339 [2024-07-15 19:43:49.555531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.598 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.599 19:43:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.535 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:56.794 19:43:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.794 00:05:56.794 real 0m1.494s 00:05:56.794 user 0m1.280s 00:05:56.794 sys 0m0.116s 00:05:56.794 ************************************ 00:05:56.794 END TEST accel_comp 00:05:56.794 ************************************ 00:05:56.794 19:43:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.794 19:43:50 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:56.794 19:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.794 19:43:50 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.794 19:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.794 19:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.795 19:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.795 ************************************ 00:05:56.795 START TEST accel_decomp 00:05:56.795 ************************************ 00:05:56.795 19:43:50 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:56.795 19:43:50 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:56.795 [2024-07-15 19:43:50.862440] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:56.795 [2024-07-15 19:43:50.862517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ] 00:05:56.795 [2024-07-15 19:43:50.993953] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.054 [2024-07-15 19:43:51.112239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.054 19:43:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 ************************************ 00:05:58.434 END TEST accel_decomp 00:05:58.434 ************************************ 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.434 19:43:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.434 00:05:58.434 real 0m1.503s 00:05:58.434 user 0m1.289s 00:05:58.434 sys 0m0.119s 00:05:58.434 19:43:52 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.434 19:43:52 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:58.434 19:43:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.434 19:43:52 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.434 19:43:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:58.434 19:43:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.434 19:43:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.435 ************************************ 00:05:58.435 START TEST accel_decomp_full 00:05:58.435 ************************************ 00:05:58.435 19:43:52 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:58.435 19:43:52 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:58.435 [2024-07-15 19:43:52.425577] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
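The accel_decomp_full case starting above differs from the plain accel_decomp run only by the extra "-o 0" argument; in the traces the plain run reports a data size of '4096 bytes' while the "-o 0" runs report '111250 bytes' (the whole test/accel/bib file), so -o appears to set the transfer size, with 0 falling back to the full input. A minimal sketch for replaying the command shown in the trace by hand, assuming the same vagrant checkout path; the "-c /dev/fd/62" argument seen above appears to feed only the harness-built accel JSON config and is dropped here:

    SPDK=/home/vagrant/spdk_repo/spdk
    # 4 KiB-chunk decompress run (matches the '4096 bytes' traces)
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y
    # full-buffer variant exercised by accel_decomp_full (matches the '111250 bytes' traces)
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0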
00:05:58.435 [2024-07-15 19:43:52.426332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61943 ] 00:05:58.435 [2024-07-15 19:43:52.566715] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.694 [2024-07-15 19:43:52.682094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.695 19:43:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 ************************************ 00:06:00.096 END TEST accel_decomp_full 00:06:00.096 ************************************ 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.096 19:43:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.096 00:06:00.096 real 0m1.528s 00:06:00.096 user 0m1.312s 00:06:00.096 sys 0m0.116s 00:06:00.096 19:43:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.096 19:43:53 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:00.096 19:43:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.096 19:43:53 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.096 19:43:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:00.096 19:43:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.096 19:43:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.096 ************************************ 00:06:00.096 START TEST accel_decomp_mcore 00:06:00.096 ************************************ 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:00.096 19:43:53 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:00.096 [2024-07-15 19:43:54.002467] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:00.096 [2024-07-15 19:43:54.002582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61978 ] 00:06:00.096 [2024-07-15 19:43:54.139617] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.096 [2024-07-15 19:43:54.261043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.096 [2024-07-15 19:43:54.261130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.096 [2024-07-15 19:43:54.261456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.096 [2024-07-15 19:43:54.261464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.096 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.355 19:43:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.291 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.292 19:43:55 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.292 00:06:01.292 real 0m1.522s 00:06:01.292 user 0m4.682s 00:06:01.292 sys 0m0.128s 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.292 ************************************ 00:06:01.292 END TEST accel_decomp_mcore 00:06:01.292 ************************************ 00:06:01.292 19:43:55 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:01.551 19:43:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.551 19:43:55 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.551 19:43:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:01.551 19:43:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.551 19:43:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.551 ************************************ 00:06:01.551 START TEST accel_decomp_full_mcore 00:06:01.551 ************************************ 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.551 19:43:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:01.551 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:01.551 [2024-07-15 19:43:55.571064] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:01.551 [2024-07-15 19:43:55.571156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:06:01.551 [2024-07-15 19:43:55.705157] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.810 [2024-07-15 19:43:55.813367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.810 [2024-07-15 19:43:55.813514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.810 [2024-07-15 19:43:55.813612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.810 [2024-07-15 19:43:55.814196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.810 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.810 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.811 19:43:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.811 19:43:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.187 ************************************ 00:06:03.187 END TEST accel_decomp_full_mcore 00:06:03.187 ************************************ 00:06:03.187 00:06:03.187 real 0m1.518s 00:06:03.187 user 0m4.711s 00:06:03.187 sys 0m0.137s 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.187 19:43:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:03.187 19:43:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.187 19:43:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.187 19:43:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:03.187 19:43:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.187 19:43:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.187 ************************************ 00:06:03.187 START TEST accel_decomp_mthread 00:06:03.187 ************************************ 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:03.187 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:03.187 [2024-07-15 19:43:57.131561] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
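The run_test line above switches to the threaded variant: accel_decomp_mthread passes "-T 2", which shows up in the xtrace as val=2 and appears to ask accel_perf for two worker threads on a single core, whereas the two mcore tests before it passed "-m 0xf", which becomes the EAL core mask "-c 0xf" (hence "Total cores available: 4" and reactors on cores 0-3 earlier in the trace). A rough sketch of the two shapes, reusing the paths from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # multi-reactor run as in accel_decomp_mcore / accel_decomp_full_mcore (cores 0-3)
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
    # single-reactor, two-thread run as in accel_decomp_mthread
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2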
00:06:03.187 [2024-07-15 19:43:57.131648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62053 ] 00:06:03.187 [2024-07-15 19:43:57.266538] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.187 [2024-07-15 19:43:57.380485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.446 19:43:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.380 00:06:04.380 real 0m1.503s 00:06:04.380 user 0m1.296s 00:06:04.380 sys 0m0.117s 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.380 ************************************ 00:06:04.380 END TEST accel_decomp_mthread 00:06:04.380 ************************************ 00:06:04.380 19:43:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:04.639 19:43:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.639 19:43:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.639 19:43:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:04.639 19:43:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.639 19:43:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.639 ************************************ 00:06:04.639 START 
TEST accel_decomp_full_mthread 00:06:04.639 ************************************ 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:04.639 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:04.639 [2024-07-15 19:43:58.686814] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
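This final case, accel_decomp_full_mthread, simply combines the two knobs used above ("-o 0" full-size buffers plus "-T 2"), still on a single reactor as the EAL parameters just below show. Replayed by hand, under the same path assumption, it would look roughly like:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2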
00:06:04.639 [2024-07-15 19:43:58.686904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62087 ] 00:06:04.639 [2024-07-15 19:43:58.816745] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.897 [2024-07-15 19:43:58.926221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.897 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.898 19:43:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.898 19:43:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.898 19:43:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.898 19:43:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.898 19:43:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.272 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 ************************************ 00:06:06.273 END TEST accel_decomp_full_mthread 00:06:06.273 ************************************ 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.273 00:06:06.273 real 0m1.517s 00:06:06.273 user 0m1.312s 00:06:06.273 sys 0m0.113s 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.273 19:44:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
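The accel_decomp_full_mthread case above is ultimately a single accel_perf invocation run under the autotest wrapper. A minimal standalone reproduction is sketched below; it assumes an empty JSON accel config can stand in for the /dev/fd/62 descriptor that build_accel_config normally supplies, and it reuses exactly the flags recorded in the log:

    # sketch: re-run the multi-threaded full-buffer decompress case by hand
    # (empty config assumed; the harness feeds its generated JSON via /dev/fd/62)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2

The -T 2 flag is presumably what drives the two worker threads behind the _mthread name; -y, -o 0 and the bib input file match the options shown in the trace above, and the software accel module is selected because no hardware module override is configured.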
00:06:06.273 19:44:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.273 19:44:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:06.273 19:44:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.273 19:44:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:06.273 19:44:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.273 19:44:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.273 19:44:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.273 19:44:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.273 19:44:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.273 19:44:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.273 19:44:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.273 19:44:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:06.273 19:44:00 accel -- accel/accel.sh@41 -- # jq -r . 00:06:06.273 19:44:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.273 ************************************ 00:06:06.273 START TEST accel_dif_functional_tests 00:06:06.273 ************************************ 00:06:06.273 19:44:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.273 [2024-07-15 19:44:00.289740] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:06.273 [2024-07-15 19:44:00.289831] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:06:06.273 [2024-07-15 19:44:00.430960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.532 [2024-07-15 19:44:00.546647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.532 [2024-07-15 19:44:00.546800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.532 [2024-07-15 19:44:00.546804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.532 [2024-07-15 19:44:00.601434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.532 00:06:06.532 00:06:06.532 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.532 http://cunit.sourceforge.net/ 00:06:06.532 00:06:06.532 00:06:06.532 Suite: accel_dif 00:06:06.532 Test: verify: DIF generated, GUARD check ...passed 00:06:06.532 Test: verify: DIF generated, APPTAG check ...passed 00:06:06.532 Test: verify: DIF generated, REFTAG check ...passed 00:06:06.532 Test: verify: DIF not generated, GUARD check ...[2024-07-15 19:44:00.639368] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.532 passed 00:06:06.532 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:44:00.639450] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.532 passed 00:06:06.532 Test: verify: DIF not generated, REFTAG check ...passed 00:06:06.532 Test: verify: APPTAG correct, APPTAG check ...[2024-07-15 19:44:00.639648] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.532 passed 00:06:06.532 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:44:00.639819] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:06.532 passed 00:06:06.532 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:06.532 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:06.532 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:06.532 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:06.532 Test: verify copy: DIF generated, GUARD check ...passed 00:06:06.532 Test: verify copy: DIF generated, APPTAG check ...[2024-07-15 19:44:00.640042] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:06.532 passed 00:06:06.532 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:06.532 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:06.532 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:44:00.640396] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.532 passed 00:06:06.532 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:44:00.640472] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.532 [2024-07-15 19:44:00.640557] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.532 passed 00:06:06.532 Test: generate copy: DIF generated, GUARD check ...passed 00:06:06.532 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:06.532 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:06.532 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:06.532 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:06.532 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:06.532 Test: generate copy: iovecs-len validate ...passed 00:06:06.532 Test: generate copy: buffer alignment validate ...passed 00:06:06.532 00:06:06.532 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.532 suites 1 1 n/a 0 0 00:06:06.532 tests 26 26 26 0 0 00:06:06.532 asserts 115 115 115 0 n/a 00:06:06.532 00:06:06.532 Elapsed time = 0.005 seconds 00:06:06.532 [2024-07-15 19:44:00.640954] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
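Each dif.c *ERROR* line above is paired with a passing test case: the "not generated" and "incorrect" variants intentionally present mismatched Guard (CRC), App Tag and Ref Tag values, so the suite passes precisely because _dif_verify and _dif_reftag_check report the mismatch. The same suite can be launched outside the harness with the binary logged above; an empty JSON config is assumed here in place of the harness-provided /dev/fd/62:

    # sketch: standalone run of the DIF functional tests (empty accel config assumed)
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')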
00:06:06.790 00:06:06.790 real 0m0.625s 00:06:06.790 user 0m0.833s 00:06:06.790 sys 0m0.152s 00:06:06.790 19:44:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.790 ************************************ 00:06:06.790 END TEST accel_dif_functional_tests 00:06:06.790 ************************************ 00:06:06.790 19:44:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:06.790 19:44:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.790 00:06:06.790 real 0m34.733s 00:06:06.790 user 0m36.321s 00:06:06.790 sys 0m4.004s 00:06:06.791 ************************************ 00:06:06.791 END TEST accel 00:06:06.791 ************************************ 00:06:06.791 19:44:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.791 19:44:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.791 19:44:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.791 19:44:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:06.791 19:44:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.791 19:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.791 19:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.791 ************************************ 00:06:06.791 START TEST accel_rpc 00:06:06.791 ************************************ 00:06:06.791 19:44:00 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:06.791 * Looking for test storage... 00:06:06.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:06.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.791 19:44:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.791 19:44:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62187 00:06:06.791 19:44:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62187 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62187 ']' 00:06:06.791 19:44:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.791 19:44:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.057 [2024-07-15 19:44:01.085736] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:07.057 [2024-07-15 19:44:01.085814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62187 ] 00:06:07.057 [2024-07-15 19:44:01.220669] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.339 [2024-07-15 19:44:01.340244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.909 19:44:02 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.909 19:44:02 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.909 19:44:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:07.909 19:44:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:07.909 19:44:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:07.909 19:44:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:07.909 19:44:02 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:07.909 19:44:02 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.909 19:44:02 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.909 19:44:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 ************************************ 00:06:07.909 START TEST accel_assign_opcode 00:06:07.909 ************************************ 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 [2024-07-15 19:44:02.104914] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 [2024-07-15 19:44:02.112899] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.909 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 [2024-07-15 19:44:02.174447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:08.168 19:44:02 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.168 software 00:06:08.168 00:06:08.168 real 0m0.299s 00:06:08.168 user 0m0.056s 00:06:08.168 sys 0m0.009s 00:06:08.168 ************************************ 00:06:08.168 END TEST accel_assign_opcode 00:06:08.168 ************************************ 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.168 19:44:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:08.426 19:44:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62187 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62187 ']' 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62187 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62187 00:06:08.426 killing process with pid 62187 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62187' 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@967 -- # kill 62187 00:06:08.426 19:44:02 accel_rpc -- common/autotest_common.sh@972 -- # wait 62187 00:06:08.684 ************************************ 00:06:08.684 END TEST accel_rpc 00:06:08.684 ************************************ 00:06:08.684 00:06:08.684 real 0m1.917s 00:06:08.684 user 0m2.074s 00:06:08.684 sys 0m0.418s 00:06:08.684 19:44:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.684 19:44:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.684 19:44:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.684 19:44:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.684 19:44:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.684 19:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.684 19:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:08.684 ************************************ 00:06:08.684 START TEST app_cmdline 00:06:08.684 ************************************ 00:06:08.684 19:44:02 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.942 * Looking for test storage... 
00:06:08.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.942 19:44:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.942 19:44:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62280 00:06:08.942 19:44:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.942 19:44:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62280 00:06:08.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62280 ']' 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.942 19:44:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.942 [2024-07-15 19:44:03.060982] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:08.942 [2024-07-15 19:44:03.061421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62280 ] 00:06:09.201 [2024-07-15 19:44:03.201896] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.201 [2024-07-15 19:44:03.319037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.201 [2024-07-15 19:44:03.376343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.136 19:44:04 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.136 19:44:04 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:10.136 { 00:06:10.136 "version": "SPDK v24.09-pre git sha1 91f51bb85", 00:06:10.136 "fields": { 00:06:10.136 "major": 24, 00:06:10.136 "minor": 9, 00:06:10.136 "patch": 0, 00:06:10.136 "suffix": "-pre", 00:06:10.136 "commit": "91f51bb85" 00:06:10.136 } 00:06:10.136 } 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.136 19:44:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.136 19:44:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.136 19:44:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.136 19:44:04 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.393 19:44:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.393 19:44:04 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.393 19:44:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:10.393 19:44:04 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.652 request: 00:06:10.652 { 00:06:10.652 "method": "env_dpdk_get_mem_stats", 00:06:10.652 "req_id": 1 00:06:10.652 } 00:06:10.652 Got JSON-RPC error response 00:06:10.652 response: 00:06:10.652 { 00:06:10.652 "code": -32601, 00:06:10.652 "message": "Method not found" 00:06:10.652 } 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.652 19:44:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62280 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62280 ']' 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62280 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62280 00:06:10.652 killing process with pid 62280 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62280' 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 62280 00:06:10.652 19:44:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 62280 00:06:10.913 00:06:10.913 real 0m2.148s 00:06:10.913 user 0m2.728s 00:06:10.913 sys 0m0.460s 00:06:10.913 19:44:05 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.913 ************************************ 00:06:10.913 END TEST app_cmdline 00:06:10.913 ************************************ 00:06:10.913 19:44:05 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.913 19:44:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.913 19:44:05 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.913 19:44:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.913 19:44:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.913 19:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.913 ************************************ 00:06:10.913 START TEST version 00:06:10.913 ************************************ 00:06:10.913 19:44:05 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.172 * Looking for test storage... 00:06:11.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.172 19:44:05 version -- app/version.sh@17 -- # get_header_version major 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # cut -f2 00:06:11.172 19:44:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.172 19:44:05 version -- app/version.sh@17 -- # major=24 00:06:11.172 19:44:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:11.172 19:44:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # cut -f2 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.172 19:44:05 version -- app/version.sh@18 -- # minor=9 00:06:11.172 19:44:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:11.172 19:44:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # cut -f2 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.172 19:44:05 version -- app/version.sh@19 -- # patch=0 00:06:11.172 19:44:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:11.172 19:44:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # cut -f2 00:06:11.172 19:44:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.172 19:44:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:11.172 19:44:05 version -- app/version.sh@22 -- # version=24.9 00:06:11.172 19:44:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.172 19:44:05 version -- app/version.sh@28 -- # version=24.9rc0 00:06:11.172 19:44:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:11.172 19:44:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.172 19:44:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:11.172 19:44:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:11.172 ************************************ 00:06:11.172 END TEST version 00:06:11.172 ************************************ 00:06:11.172 00:06:11.172 real 0m0.150s 00:06:11.172 user 0m0.083s 00:06:11.172 sys 0m0.097s 00:06:11.172 19:44:05 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.172 19:44:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 19:44:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.172 19:44:05 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:11.172 19:44:05 -- spdk/autotest.sh@198 -- # uname -s 00:06:11.172 19:44:05 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:11.172 19:44:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:11.172 19:44:05 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:11.172 19:44:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:11.172 19:44:05 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.172 19:44:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.172 19:44:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.172 19:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 ************************************ 00:06:11.172 START TEST spdk_dd 00:06:11.172 ************************************ 00:06:11.172 19:44:05 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.172 * Looking for test storage... 00:06:11.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.172 19:44:05 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.172 19:44:05 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.173 19:44:05 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.173 19:44:05 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.173 19:44:05 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 19:44:05 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 19:44:05 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 19:44:05 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:11.173 19:44:05 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 19:44:05 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.741 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:11.741 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:11.741 19:44:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:11.741 19:44:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:11.741 19:44:05 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:11.742 19:44:05 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:11.742 19:44:05 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:11.742 19:44:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:11.742 19:44:05 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.742 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:11.743 
19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:11.743 * spdk_dd linked to liburing 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:11.743 19:44:05 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:11.743 19:44:05 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:11.744 19:44:05 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:11.744 19:44:05 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:11.744 19:44:05 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:11.744 19:44:05 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:11.744 19:44:05 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:11.744 19:44:05 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:11.744 19:44:05 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:11.744 19:44:05 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:11.744 19:44:05 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:11.744 19:44:05 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.744 19:44:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:11.744 ************************************ 00:06:11.744 START TEST spdk_dd_basic_rw 00:06:11.744 ************************************ 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:11.744 * Looking for test storage... 
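The records above show dd/common.sh deciding whether the spdk_dd binary is linked against liburing: it reads the shared-object list one entry at a time, pattern-matches each name against liburing.so.*, prints "spdk_dd linked to liburing" on the first hit, and then exports liburing_in_use=1 after confirming /usr/lib64/liburing.so.2 exists. A minimal sketch of that detection pattern, assuming the list comes from ldd-style "name => path" output; the helper name below is illustrative, not the actual dd/common.sh function:

detect_liburing() {
    local bin=$1 lib _ so _
    while read -r lib _ so _; do
        # e.g. "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
        if [[ $lib == liburing.so.* ]]; then
            printf '* %s linked to liburing\n' "${bin##*/}"
            return 0
        fi
    done < <(ldd "$bin")
    return 1
}

detect_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd && export liburing_in_use=1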
00:06:11.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:11.744 19:44:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:12.005 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:12.005 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.006 ************************************ 00:06:12.006 START TEST dd_bs_lt_native_bs 00:06:12.006 ************************************ 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.006 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.006 { 00:06:12.006 "subsystems": [ 00:06:12.006 { 00:06:12.006 "subsystem": "bdev", 00:06:12.006 "config": [ 00:06:12.006 { 00:06:12.006 "params": { 00:06:12.006 "trtype": "pcie", 00:06:12.006 "traddr": "0000:00:10.0", 00:06:12.006 "name": "Nvme0" 00:06:12.006 }, 00:06:12.006 "method": "bdev_nvme_attach_controller" 00:06:12.006 }, 00:06:12.006 { 00:06:12.006 "method": "bdev_wait_for_examine" 00:06:12.006 } 00:06:12.006 ] 00:06:12.006 } 00:06:12.006 ] 00:06:12.006 } 00:06:12.006 [2024-07-15 19:44:06.225801] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
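Two bash regex tests drive the block-size discovery traced just above: the first pulls the active format index out of "Current LBA Format: LBA Format #04", the second pulls that format's "Data Size" (4096 here), which then becomes native_bs for the rest of basic_rw.sh. A rough equivalent of that extraction, with the identify command taken from the trace and the regexes stored in variables purely for readability:

id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')

re_cur='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}        # "04"

re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}  # 4096

echo "$native_bs"

native_bs then sets the baseline for every block size the dd_rw passes use later in this trace.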
00:06:12.006 [2024-07-15 19:44:06.225886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62601 ] 00:06:12.265 [2024-07-15 19:44:06.367934] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.265 [2024-07-15 19:44:06.497672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.524 [2024-07-15 19:44:06.561102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.524 [2024-07-15 19:44:06.671277] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:12.524 [2024-07-15 19:44:06.671359] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.783 [2024-07-15 19:44:06.798499] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.783 00:06:12.783 real 0m0.731s 00:06:12.783 user 0m0.512s 00:06:12.783 sys 0m0.176s 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.783 ************************************ 00:06:12.783 END TEST dd_bs_lt_native_bs 00:06:12.783 ************************************ 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.783 ************************************ 00:06:12.783 START TEST dd_rw 00:06:12.783 ************************************ 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.783 19:44:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.350 19:44:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:13.350 19:44:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:13.350 19:44:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.350 19:44:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.609 [2024-07-15 19:44:07.619210] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:13.609 [2024-07-15 19:44:07.619358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:06:13.609 { 00:06:13.609 "subsystems": [ 00:06:13.609 { 00:06:13.609 "subsystem": "bdev", 00:06:13.609 "config": [ 00:06:13.609 { 00:06:13.609 "params": { 00:06:13.609 "trtype": "pcie", 00:06:13.609 "traddr": "0000:00:10.0", 00:06:13.609 "name": "Nvme0" 00:06:13.609 }, 00:06:13.609 "method": "bdev_nvme_attach_controller" 00:06:13.609 }, 00:06:13.609 { 00:06:13.609 "method": "bdev_wait_for_examine" 00:06:13.609 } 00:06:13.609 ] 00:06:13.609 } 00:06:13.609 ] 00:06:13.609 } 00:06:13.609 [2024-07-15 19:44:07.759066] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.867 [2024-07-15 19:44:07.860917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.867 [2024-07-15 19:44:07.915828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.126  Copying: 60/60 [kB] (average 19 MBps) 00:06:14.126 00:06:14.126 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.126 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:14.126 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.126 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.126 { 00:06:14.126 "subsystems": [ 00:06:14.126 { 00:06:14.126 "subsystem": "bdev", 00:06:14.126 "config": [ 
00:06:14.126 { 00:06:14.126 "params": { 00:06:14.126 "trtype": "pcie", 00:06:14.126 "traddr": "0000:00:10.0", 00:06:14.126 "name": "Nvme0" 00:06:14.126 }, 00:06:14.126 "method": "bdev_nvme_attach_controller" 00:06:14.126 }, 00:06:14.126 { 00:06:14.126 "method": "bdev_wait_for_examine" 00:06:14.126 } 00:06:14.126 ] 00:06:14.126 } 00:06:14.126 ] 00:06:14.126 } 00:06:14.126 [2024-07-15 19:44:08.301419] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:14.126 [2024-07-15 19:44:08.301508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62656 ] 00:06:14.385 [2024-07-15 19:44:08.437979] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.385 [2024-07-15 19:44:08.553244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.385 [2024-07-15 19:44:08.606683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.907  Copying: 60/60 [kB] (average 19 MBps) 00:06:14.907 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.907 19:44:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.907 [2024-07-15 19:44:08.998414] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
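A few records earlier, dd_bs_lt_native_bs wrapped its spdk_dd call in NOT and treated the command's failure (the "--bs value cannot be less than ... native block size" error) as the passing outcome, remapping the raw exit status before asserting it was non-zero. A generic sketch of that expect-failure idiom; the body here is a guess at the behaviour, not a copy of autotest_common.sh:

NOT() {
    # invert the wrapped command: succeed only if it fails
    if "$@"; then
        return 1
    fi
    return 0
}

# passes, because 2048 is below the 4096-byte native block size discovered above
NOT spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=2048 --json <(gen_conf)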
00:06:14.908 [2024-07-15 19:44:08.998508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62666 ] 00:06:14.908 { 00:06:14.908 "subsystems": [ 00:06:14.908 { 00:06:14.908 "subsystem": "bdev", 00:06:14.908 "config": [ 00:06:14.908 { 00:06:14.908 "params": { 00:06:14.908 "trtype": "pcie", 00:06:14.908 "traddr": "0000:00:10.0", 00:06:14.908 "name": "Nvme0" 00:06:14.908 }, 00:06:14.908 "method": "bdev_nvme_attach_controller" 00:06:14.908 }, 00:06:14.908 { 00:06:14.908 "method": "bdev_wait_for_examine" 00:06:14.908 } 00:06:14.908 ] 00:06:14.908 } 00:06:14.908 ] 00:06:14.908 } 00:06:14.908 [2024-07-15 19:44:09.131731] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.166 [2024-07-15 19:44:09.231458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.166 [2024-07-15 19:44:09.288903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.425  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:15.425 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:15.425 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.993 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:15.993 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:15.993 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.993 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.251 [2024-07-15 19:44:10.237830] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:16.251 [2024-07-15 19:44:10.237939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62690 ] 00:06:16.251 { 00:06:16.251 "subsystems": [ 00:06:16.251 { 00:06:16.251 "subsystem": "bdev", 00:06:16.251 "config": [ 00:06:16.251 { 00:06:16.251 "params": { 00:06:16.251 "trtype": "pcie", 00:06:16.251 "traddr": "0000:00:10.0", 00:06:16.251 "name": "Nvme0" 00:06:16.251 }, 00:06:16.251 "method": "bdev_nvme_attach_controller" 00:06:16.251 }, 00:06:16.251 { 00:06:16.251 "method": "bdev_wait_for_examine" 00:06:16.251 } 00:06:16.251 ] 00:06:16.251 } 00:06:16.252 ] 00:06:16.252 } 00:06:16.252 [2024-07-15 19:44:10.378486] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.511 [2024-07-15 19:44:10.498037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.511 [2024-07-15 19:44:10.554076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.771  Copying: 60/60 [kB] (average 58 MBps) 00:06:16.771 00:06:16.771 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:16.771 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:16.771 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.771 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.771 [2024-07-15 19:44:10.953961] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:16.771 [2024-07-15 19:44:10.954094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62704 ] 00:06:16.771 { 00:06:16.771 "subsystems": [ 00:06:16.771 { 00:06:16.771 "subsystem": "bdev", 00:06:16.771 "config": [ 00:06:16.771 { 00:06:16.771 "params": { 00:06:16.771 "trtype": "pcie", 00:06:16.771 "traddr": "0000:00:10.0", 00:06:16.771 "name": "Nvme0" 00:06:16.771 }, 00:06:16.771 "method": "bdev_nvme_attach_controller" 00:06:16.771 }, 00:06:16.771 { 00:06:16.771 "method": "bdev_wait_for_examine" 00:06:16.771 } 00:06:16.771 ] 00:06:16.771 } 00:06:16.771 ] 00:06:16.771 } 00:06:17.030 [2024-07-15 19:44:11.093361] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.030 [2024-07-15 19:44:11.213871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.030 [2024-07-15 19:44:11.269641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.549  Copying: 60/60 [kB] (average 58 MBps) 00:06:17.549 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.549 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 [2024-07-15 19:44:11.673758] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
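With both queue depths now exercised at the 4096-byte block size, the trace moves on to the 8192-byte passes. The combinations come from the bss/qds arrays built at the top of dd_rw: each left-shift of the native block size adds one entry, and every entry runs at queue depths 1 and 64. A sketch of that matrix; the per-pass count calculation is an assumption chosen only to reproduce the 15 x 4096 and 7 x 8192 transfers visible in this log:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))      # 4096 8192 16384
done

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))        # roughly 60 KiB per pass: 15, 7, 3
        size=$((count * bs))         # 61440, 57344, 49152
        echo "bs=$bs qd=$qd count=$count size=$size"
    done
done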
00:06:17.549 [2024-07-15 19:44:11.673889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:06:17.549 { 00:06:17.549 "subsystems": [ 00:06:17.549 { 00:06:17.549 "subsystem": "bdev", 00:06:17.549 "config": [ 00:06:17.549 { 00:06:17.549 "params": { 00:06:17.549 "trtype": "pcie", 00:06:17.549 "traddr": "0000:00:10.0", 00:06:17.549 "name": "Nvme0" 00:06:17.549 }, 00:06:17.549 "method": "bdev_nvme_attach_controller" 00:06:17.549 }, 00:06:17.549 { 00:06:17.549 "method": "bdev_wait_for_examine" 00:06:17.549 } 00:06:17.549 ] 00:06:17.549 } 00:06:17.549 ] 00:06:17.549 } 00:06:17.809 [2024-07-15 19:44:11.813659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.809 [2024-07-15 19:44:11.923049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.809 [2024-07-15 19:44:11.980280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.069  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:18.069 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:18.337 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.904 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:18.904 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:18.904 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.904 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.904 [2024-07-15 19:44:12.958282] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:18.904 [2024-07-15 19:44:12.958416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:06:18.904 { 00:06:18.904 "subsystems": [ 00:06:18.904 { 00:06:18.904 "subsystem": "bdev", 00:06:18.904 "config": [ 00:06:18.904 { 00:06:18.904 "params": { 00:06:18.904 "trtype": "pcie", 00:06:18.904 "traddr": "0000:00:10.0", 00:06:18.904 "name": "Nvme0" 00:06:18.904 }, 00:06:18.904 "method": "bdev_nvme_attach_controller" 00:06:18.904 }, 00:06:18.904 { 00:06:18.904 "method": "bdev_wait_for_examine" 00:06:18.904 } 00:06:18.904 ] 00:06:18.904 } 00:06:18.904 ] 00:06:18.904 } 00:06:18.904 [2024-07-15 19:44:13.098237] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.163 [2024-07-15 19:44:13.206309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.163 [2024-07-15 19:44:13.263816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.422  Copying: 56/56 [kB] (average 27 MBps) 00:06:19.422 00:06:19.422 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:19.422 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:19.422 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.422 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.422 { 00:06:19.422 "subsystems": [ 00:06:19.422 { 00:06:19.422 "subsystem": "bdev", 00:06:19.422 "config": [ 00:06:19.422 { 00:06:19.422 "params": { 00:06:19.422 "trtype": "pcie", 00:06:19.422 "traddr": "0000:00:10.0", 00:06:19.422 "name": "Nvme0" 00:06:19.422 }, 00:06:19.422 "method": "bdev_nvme_attach_controller" 00:06:19.422 }, 00:06:19.422 { 00:06:19.422 "method": "bdev_wait_for_examine" 00:06:19.422 } 00:06:19.422 ] 00:06:19.422 } 00:06:19.422 ] 00:06:19.422 } 00:06:19.422 [2024-07-15 19:44:13.659109] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
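Every spdk_dd invocation in this log receives its bdev configuration as JSON on a /dev/fd path rather than from a file on disk: gen_conf prints the same "subsystems" block each time (attach the PCIe controller at 0000:00:10.0 as Nvme0, then bdev_wait_for_examine). One way to reproduce that wiring is process substitution, which is an inference from the /dev/fd/61 and /dev/fd/62 paths rather than something the trace shows directly; the literal heredoc below is a simplification of whatever the real gen_conf builds:

gen_conf() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 \
    --bs=8192 --qd=1 --count=7 --json <(gen_conf)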
00:06:19.422 [2024-07-15 19:44:13.659227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62763 ] 00:06:19.680 [2024-07-15 19:44:13.793044] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.680 [2024-07-15 19:44:13.901852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.938 [2024-07-15 19:44:13.956670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.197  Copying: 56/56 [kB] (average 27 MBps) 00:06:20.197 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.197 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.197 [2024-07-15 19:44:14.340125] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:20.197 [2024-07-15 19:44:14.340214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62773 ] 00:06:20.197 { 00:06:20.197 "subsystems": [ 00:06:20.197 { 00:06:20.197 "subsystem": "bdev", 00:06:20.197 "config": [ 00:06:20.197 { 00:06:20.197 "params": { 00:06:20.197 "trtype": "pcie", 00:06:20.197 "traddr": "0000:00:10.0", 00:06:20.197 "name": "Nvme0" 00:06:20.197 }, 00:06:20.197 "method": "bdev_nvme_attach_controller" 00:06:20.197 }, 00:06:20.197 { 00:06:20.197 "method": "bdev_wait_for_examine" 00:06:20.197 } 00:06:20.197 ] 00:06:20.197 } 00:06:20.197 ] 00:06:20.197 } 00:06:20.458 [2024-07-15 19:44:14.477475] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.458 [2024-07-15 19:44:14.589800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.458 [2024-07-15 19:44:14.645242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.975  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:20.975 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:20.975 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.542 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:21.542 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:21.542 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.542 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.542 [2024-07-15 19:44:15.556173] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:21.542 [2024-07-15 19:44:15.556309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62803 ] 00:06:21.542 { 00:06:21.542 "subsystems": [ 00:06:21.542 { 00:06:21.542 "subsystem": "bdev", 00:06:21.542 "config": [ 00:06:21.542 { 00:06:21.542 "params": { 00:06:21.542 "trtype": "pcie", 00:06:21.542 "traddr": "0000:00:10.0", 00:06:21.542 "name": "Nvme0" 00:06:21.542 }, 00:06:21.542 "method": "bdev_nvme_attach_controller" 00:06:21.542 }, 00:06:21.542 { 00:06:21.542 "method": "bdev_wait_for_examine" 00:06:21.542 } 00:06:21.542 ] 00:06:21.542 } 00:06:21.542 ] 00:06:21.542 } 00:06:21.542 [2024-07-15 19:44:15.694088] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.800 [2024-07-15 19:44:15.796738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.801 [2024-07-15 19:44:15.853323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.090  Copying: 56/56 [kB] (average 54 MBps) 00:06:22.090 00:06:22.090 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:22.090 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:22.090 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.090 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.090 { 00:06:22.090 "subsystems": [ 00:06:22.090 { 00:06:22.090 "subsystem": "bdev", 00:06:22.090 "config": [ 00:06:22.090 { 00:06:22.090 "params": { 00:06:22.090 "trtype": "pcie", 00:06:22.090 "traddr": "0000:00:10.0", 00:06:22.090 "name": "Nvme0" 00:06:22.090 }, 00:06:22.090 "method": "bdev_nvme_attach_controller" 00:06:22.090 }, 00:06:22.090 { 00:06:22.090 "method": "bdev_wait_for_examine" 00:06:22.090 } 00:06:22.090 ] 00:06:22.090 } 00:06:22.090 ] 00:06:22.090 } 00:06:22.090 [2024-07-15 19:44:16.242940] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:22.090 [2024-07-15 19:44:16.243067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:06:22.372 [2024-07-15 19:44:16.381136] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.372 [2024-07-15 19:44:16.499351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.372 [2024-07-15 19:44:16.556517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.887  Copying: 56/56 [kB] (average 54 MBps) 00:06:22.887 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.887 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.887 { 00:06:22.887 "subsystems": [ 00:06:22.887 { 00:06:22.887 "subsystem": "bdev", 00:06:22.887 "config": [ 00:06:22.887 { 00:06:22.887 "params": { 00:06:22.887 "trtype": "pcie", 00:06:22.887 "traddr": "0000:00:10.0", 00:06:22.887 "name": "Nvme0" 00:06:22.887 }, 00:06:22.887 "method": "bdev_nvme_attach_controller" 00:06:22.887 }, 00:06:22.887 { 00:06:22.887 "method": "bdev_wait_for_examine" 00:06:22.887 } 00:06:22.887 ] 00:06:22.887 } 00:06:22.887 ] 00:06:22.887 } 00:06:22.887 [2024-07-15 19:44:16.970532] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:22.887 [2024-07-15 19:44:16.970718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62832 ] 00:06:22.887 [2024-07-15 19:44:17.113289] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.146 [2024-07-15 19:44:17.227693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.146 [2024-07-15 19:44:17.284925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.404  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.404 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:23.404 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.013 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:24.013 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:24.013 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.013 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.013 { 00:06:24.013 "subsystems": [ 00:06:24.013 { 00:06:24.013 "subsystem": "bdev", 00:06:24.013 "config": [ 00:06:24.013 { 00:06:24.013 "params": { 00:06:24.013 "trtype": "pcie", 00:06:24.013 "traddr": "0000:00:10.0", 00:06:24.013 "name": "Nvme0" 00:06:24.013 }, 00:06:24.013 "method": "bdev_nvme_attach_controller" 00:06:24.013 }, 00:06:24.013 { 00:06:24.013 "method": "bdev_wait_for_examine" 00:06:24.013 } 00:06:24.013 ] 00:06:24.013 } 00:06:24.013 ] 00:06:24.013 } 00:06:24.013 [2024-07-15 19:44:18.118361] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
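[Editor's note] Each dd_rw round above follows the same four-step shape, readable straight off the xtrace: write the pre-generated pattern file to the Nvme0n1 bdev at a given block size and queue depth, read the same number of blocks back into a second file, compare the two dumps with diff -q, then zero the first MiB of the bdev before the next block-size/queue-depth combination. A sketch of the bs=8192, qd=64, 7-block round that just completed, assuming the gen_conf stand-in from the earlier note (the real commands are copied verbatim from the trace):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
bs=8192 qd=64 count=7   # 7 * 8192 = 57344 bytes, matching the "Copying: 56/56 [kB]" lines above

# write dd.dump0 (pre-filled by gen_bytes 57344) to the bdev at the given bs/qd
"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
# read the same byte count back from the bdev into dd.dump1
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
# the round only passes if the written and read-back files are identical
diff -q "$TESTDIR/dd.dump0" "$TESTDIR/dd.dump1"
# clear_nvme: overwrite the first 1 MiB of the bdev with zeroes before the next round
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)

The same cycle repeats below for bs=16384 at qd=1 and qd=64, with count=3 (48 KiB) instead of 7.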
00:06:24.013 [2024-07-15 19:44:18.118459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62851 ] 00:06:24.273 [2024-07-15 19:44:18.256721] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.273 [2024-07-15 19:44:18.363493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.273 [2024-07-15 19:44:18.421024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.532  Copying: 48/48 [kB] (average 46 MBps) 00:06:24.532 00:06:24.532 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:24.532 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:24.532 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.532 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.789 [2024-07-15 19:44:18.799190] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:24.789 [2024-07-15 19:44:18.799425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62870 ] 00:06:24.789 { 00:06:24.789 "subsystems": [ 00:06:24.789 { 00:06:24.789 "subsystem": "bdev", 00:06:24.789 "config": [ 00:06:24.789 { 00:06:24.789 "params": { 00:06:24.789 "trtype": "pcie", 00:06:24.789 "traddr": "0000:00:10.0", 00:06:24.789 "name": "Nvme0" 00:06:24.789 }, 00:06:24.789 "method": "bdev_nvme_attach_controller" 00:06:24.789 }, 00:06:24.789 { 00:06:24.789 "method": "bdev_wait_for_examine" 00:06:24.789 } 00:06:24.789 ] 00:06:24.789 } 00:06:24.789 ] 00:06:24.789 } 00:06:24.789 [2024-07-15 19:44:18.929814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.789 [2024-07-15 19:44:19.028124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.046 [2024-07-15 19:44:19.083023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.304  Copying: 48/48 [kB] (average 46 MBps) 00:06:25.304 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.305 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.305 [2024-07-15 19:44:19.491656] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:25.305 [2024-07-15 19:44:19.491746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62880 ] 00:06:25.305 { 00:06:25.305 "subsystems": [ 00:06:25.305 { 00:06:25.305 "subsystem": "bdev", 00:06:25.305 "config": [ 00:06:25.305 { 00:06:25.305 "params": { 00:06:25.305 "trtype": "pcie", 00:06:25.305 "traddr": "0000:00:10.0", 00:06:25.305 "name": "Nvme0" 00:06:25.305 }, 00:06:25.305 "method": "bdev_nvme_attach_controller" 00:06:25.305 }, 00:06:25.305 { 00:06:25.305 "method": "bdev_wait_for_examine" 00:06:25.305 } 00:06:25.305 ] 00:06:25.305 } 00:06:25.305 ] 00:06:25.305 } 00:06:25.563 [2024-07-15 19:44:19.629322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.563 [2024-07-15 19:44:19.744735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.563 [2024-07-15 19:44:19.801184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.086  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:26.086 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:26.086 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.661 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:26.661 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:26.661 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.662 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.662 [2024-07-15 19:44:20.652128] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:26.662 [2024-07-15 19:44:20.652434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62905 ] 00:06:26.662 { 00:06:26.662 "subsystems": [ 00:06:26.662 { 00:06:26.662 "subsystem": "bdev", 00:06:26.662 "config": [ 00:06:26.662 { 00:06:26.662 "params": { 00:06:26.662 "trtype": "pcie", 00:06:26.662 "traddr": "0000:00:10.0", 00:06:26.662 "name": "Nvme0" 00:06:26.662 }, 00:06:26.662 "method": "bdev_nvme_attach_controller" 00:06:26.662 }, 00:06:26.662 { 00:06:26.662 "method": "bdev_wait_for_examine" 00:06:26.662 } 00:06:26.662 ] 00:06:26.662 } 00:06:26.662 ] 00:06:26.662 } 00:06:26.662 [2024-07-15 19:44:20.793359] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.662 [2024-07-15 19:44:20.888242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.920 [2024-07-15 19:44:20.945346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.178  Copying: 48/48 [kB] (average 46 MBps) 00:06:27.178 00:06:27.178 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:27.179 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:27.179 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.179 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.179 [2024-07-15 19:44:21.310976] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:27.179 [2024-07-15 19:44:21.311085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62918 ] 00:06:27.179 { 00:06:27.179 "subsystems": [ 00:06:27.179 { 00:06:27.179 "subsystem": "bdev", 00:06:27.179 "config": [ 00:06:27.179 { 00:06:27.179 "params": { 00:06:27.179 "trtype": "pcie", 00:06:27.179 "traddr": "0000:00:10.0", 00:06:27.179 "name": "Nvme0" 00:06:27.179 }, 00:06:27.179 "method": "bdev_nvme_attach_controller" 00:06:27.179 }, 00:06:27.179 { 00:06:27.179 "method": "bdev_wait_for_examine" 00:06:27.179 } 00:06:27.179 ] 00:06:27.179 } 00:06:27.179 ] 00:06:27.179 } 00:06:27.437 [2024-07-15 19:44:21.443820] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.437 [2024-07-15 19:44:21.549242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.437 [2024-07-15 19:44:21.605039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.955  Copying: 48/48 [kB] (average 46 MBps) 00:06:27.955 00:06:27.955 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.955 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:27.955 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:27.955 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.956 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.956 { 00:06:27.956 "subsystems": [ 00:06:27.956 { 00:06:27.956 "subsystem": "bdev", 00:06:27.956 "config": [ 00:06:27.956 { 00:06:27.956 "params": { 00:06:27.956 "trtype": "pcie", 00:06:27.956 "traddr": "0000:00:10.0", 00:06:27.956 "name": "Nvme0" 00:06:27.956 }, 00:06:27.956 "method": "bdev_nvme_attach_controller" 00:06:27.956 }, 00:06:27.956 { 00:06:27.956 "method": "bdev_wait_for_examine" 00:06:27.956 } 00:06:27.956 ] 00:06:27.956 } 00:06:27.956 ] 00:06:27.956 } 00:06:27.956 [2024-07-15 19:44:22.012451] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:27.956 [2024-07-15 19:44:22.012550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62939 ] 00:06:27.956 [2024-07-15 19:44:22.150756] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.215 [2024-07-15 19:44:22.249690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.215 [2024-07-15 19:44:22.303542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.473  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:28.473 00:06:28.473 00:06:28.473 real 0m15.691s 00:06:28.473 user 0m11.592s 00:06:28.473 sys 0m5.636s 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.473 ************************************ 00:06:28.473 END TEST dd_rw 00:06:28.473 ************************************ 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.473 ************************************ 00:06:28.473 START TEST dd_rw_offset 00:06:28.473 ************************************ 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:28.473 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=b9xpdhy0tohec0cqzuwqfv47w1ky5yzyg1fu28r6o6pxzuek5h55pthe8etk9ca6er4457iqn31ydm43a574zicnyd2pr0wf3pt90dnwjb2eiexx4sa4owyyt4al7sj228zz3mt1nosyzwxtjyiv0xxqfjuu6zt90gmc4k7m8cmzz08lk2u20zxny2wudtfxfpa4gngh5dqcx2k0mp7j2pxqw470la723zxj8bv6r55wz9gl039z75607m31gilmuivpu59asfhh63cx32fd5hsh7dz6btozket2513ermio4pn1m5i2l2hvsq1r479kk28x5h8ow18d1dd2junssgq9a3qgr9u8fugc834p16bv70kh8ny0msbd6mgsofy5l031ibo0n3pfpx74inqcrtjytcuat5tfi9dx1cxkupo69hun2cabem8u6c4yes7r6u5cc8malrpp9afjgkwk3rcohzi7pccpo5ptythraxijdauiog84zbkhgcv27t8nddp49flqjjx8g2am5zo48o6ozi3crweeob804o2mr1i539hguw2a3dg171i2jfl6ssrl5qy6lzd0xs1wwyn43nwr6yqxxdt0v7ze2ur6n88spp20pom3q96giktemzxig8v20pr0yg2gbtq0nndek3hi0tbpqank1cl5sqozssq16mo8yey2ddk2wrp9wytd2qjn0ehk4f36338nc2gpqroh7rkvnb1bcfekz3exg8ebgk17oc2h8929j5dai2iy1ug09u5l6o9bjce600wf7jpuvbk0wunh9920vhu9bmk0ji4vdq4jabbxpiogcfgrpj8wb8utoydahg0vin8lgp22wya58gi6657vcfb5zru0hapylqysv6lmya5dm2980ovagm49tnfxw7d338i7a3f55dpyeuctsd6ptpjdhvfosjq0m33q7tom0dlsgj2zjidpr8iqyw8iynds7dzhkpyuo38iof7ht5bzngsume7jeackouzlorczvx8n1bjqte0kmld6db06lvvk329k5lko0vpq2b2xe1h0paec03mkrngm2f0zdqlcrtj6qw8p1810xj7c9md3d7cx1y81ej2xa32dba70wzxmt1z2twu7j2g1i58hufebgrjpzbtpfpqvqb29u4aest9j8kwsjj7qp3a734smv5euk6ms2di5toiv3u2t8bpr8pa4pkgvnjtr02qw0wzaqh4rvk8a6t6tjawjvqgxezfzdqyjqfqodxy3bg6fjflknmlkmm7l9icwbfyxowfpo20qhac0ywp96kkmcbwmcddhl8781fcl2i36xd4bk8dvir0m65xn2xc4845fi27c5rht5prk99a8psoi0qlcefcc4yhqo7zhnlwzosxr9zsb2cxf1vbefja9w0tx0qvw4nouz7963ng0wmnwqyah18pmx5q22i5yoggwmi03m47mnb385y9b3hfvhc8l77wtc8areqp5wilfj5yu8llazb4h0fn1a0x93oxdipup4rb4gmzqkz3jwzk1n4t3ldbmi9mjpfw58bx0l5g2zxq9eocrudattj2ajltfcw1absif4pda08cc132wd95utqbfv21r2vgcdd370n2v1qq4ckhtkwlkwd5fnen8bsl433p7hyqj65yny1h5h2k9avbxcm7hkqlfg5lhdhh1vpjqq37hgwt8z58buu2vsxh5nb14alh8in5t55qnh9qa7004dzplxf2zvfd3qinuhrrufvhoe3qed5qaqyhm22t9jwrsiiklkmiegya3g4b709magr7wuajdxlycxqfhah5y3eqrypea6auiov3z5ftlxaxtndbrrndydqrzowkutat57omdb8yi7qlddwb8ou9o7tfdg7vstmo1efnbdelcy23envd5ejgfkd8g7jyza5ncmp2wfrnwhd09x3i64fvl1z4y3mu8970eo1qk6t4j99c9zgprgyaotikpcikyvam2bdwvjngzlwpoxvo9rzri6rlagvj219f4ifosclbqt78vtpdz0nwg7wuef8fwypq71v2znh2e2ca567g6hh9ts1mc21zvvpm2q522h29j86z8pvr3h75l0f4a27ou29k4uqpti0h2o8h798kmwm0z4l5umq82ngo04y88ri739rhsfu8ukukppxwvqj4n431m5hvqjgfedqbnfx4s8yh6ekktjdnyos6urjbze3ij2r4c8dq478zv44w3o0p9pmjg9v3m4h10hr0e4ptw693wamt9fnifhoh9g7eymd2nu4wuv1ocbvbjn45m9yx42bd3puf1ju1pvra4ixvh2u6fw75yyr0b1s2hk5od0baqojfyq27mlk85en7npgiffxqqawzshhrkxb6apvdpaw5nt25wlypo42i0v0idk5t845mmthju9f8rw1fdgtqbyzwds3yidfthtxtl7gttcj8mys5xy26wfj18uc97qp6smtdw675s4imptjfmh5z8f8rue37s7k95f1is8lpnbj04l1ph5co4eungy31b98bi1ylw9p3pbdwez8dj2zsufz2vlrz6hkugapc2s14usnfmvf319vyhltcv94cpupc2hctxgjqkyyaoddupgdytect66bjgwlkh03e58t889bnjxkwsk3h1sdh8e6x0fk5r4ckbslvd983twlhk6id95683xbc8ip4sqerfni1u82xepxukr8j2e8vn02n2a4z6y1pwzlnqdfuxzfpgai1nehxmglpftvm1n8du73mmlstxvyrday9v1pnhf44b4v605x351lkrcov8ggvt0elcme5fk1vovg3vynw6gnkmyyhcuzt6fyslxvqm0upcdgues5klrtg25pr4vkk2vdl2b0x7fev2m136txvn48uw41ddw17hmzozbf0v5umqq2e5iox01t4vf4yspk5x74mibsjxxt3eq7nxr5fbvalzbhgp945jicovhil5v2e93mzue25hsc1oz2g1xopsiiujj6cydcw8040idblowh6qpkgu2o0t59mzeehc138xs6ucarmf37gakwsh40pdtejizb78xegm0rv77zlfscaxm6hl9dr9q8ui2ohpvkyj7xaa98mfk3ltk41xzuvhbns71x0mueleh3fbfyl19xwd61zim7f591w3px2tytyxiqvim4rodnpriwqtuifrzkkwt3jvrrwv7db3nm7azwoj1rp5sfwumwxfy4ms4aafiecj8joy3lr8piqvteq9zffmkw0pcp4ifzjyl1slsx3nlct58fixx9ccs2vjv0ung71y0sm73dpqdbgr36amfbo0cjmnfr07vgeh1ium83du32e1t3epaapkkfn37fren7nwtzzc6nl319xa3gppnwduoh9wq7xv9w608kp6s5cbkuminlaeyj19vt3lie8m9g1ccumwjix05z6t74d4dx6h8gkcxpzh2rjqsj9v3kiv86154i4udfivjsfisdaa2wxvo5lbnskqmim54d3r2nqqkdhwxv04kxv4e9dg83m4vb
e57em70c551az8q3fgao9jnjh9y2b6t75ni8eh0plh4ygeb5ufwi8qify3ytehkbxanawordz5t898c4e8krsxe2chd78dufhwa2apwtjggv1y4r20s3sw15ocb2f0ngjp6v5ddgrxi31brxfz9d66h9g88yj28nda9dwplekvvq0wclxb2sslip6pzzqj0p5w2oiyg83kamzv4b5hgt9j1g4i04cshv8i9nklnvw2l31mj8xby732azev9awxxlohylmmo77s4cjmw52hvj4x4isoskknuduxtret1qcsbjwg0i2cp18vv0ynlobqe5avnz98sjvx27457tr84l7vv2kd6ygkax6qjrijqajdp6a8zswotp566y6i8rl2ioh62tjwfsmcbspa20vx1zwz68gi4nxa2km8ieflowdl8ducmrjrpulfnywin1hm2018y0tzexqa2qudol6t1bm0tr9q7lhroqyznfhzrs6b01ej5frwvi7c1uf1ag0uitbgou4syd8igjxkn35uv8c914uqmh5c25xz 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:28.733 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:28.733 [2024-07-15 19:44:22.795081] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:28.733 [2024-07-15 19:44:22.795165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62970 ] 00:06:28.733 { 00:06:28.733 "subsystems": [ 00:06:28.733 { 00:06:28.733 "subsystem": "bdev", 00:06:28.733 "config": [ 00:06:28.733 { 00:06:28.733 "params": { 00:06:28.733 "trtype": "pcie", 00:06:28.733 "traddr": "0000:00:10.0", 00:06:28.733 "name": "Nvme0" 00:06:28.733 }, 00:06:28.733 "method": "bdev_nvme_attach_controller" 00:06:28.733 }, 00:06:28.733 { 00:06:28.733 "method": "bdev_wait_for_examine" 00:06:28.733 } 00:06:28.733 ] 00:06:28.733 } 00:06:28.733 ] 00:06:28.733 } 00:06:28.733 [2024-07-15 19:44:22.933519] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.992 [2024-07-15 19:44:23.029704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.992 [2024-07-15 19:44:23.086633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.249  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:29.249 00:06:29.249 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:29.249 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:29.249 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:29.249 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:29.249 { 00:06:29.249 "subsystems": [ 00:06:29.249 { 00:06:29.249 "subsystem": "bdev", 00:06:29.249 "config": [ 00:06:29.249 { 00:06:29.249 "params": { 00:06:29.249 "trtype": "pcie", 00:06:29.249 "traddr": "0000:00:10.0", 00:06:29.249 "name": "Nvme0" 00:06:29.249 }, 00:06:29.249 "method": "bdev_nvme_attach_controller" 00:06:29.249 }, 00:06:29.249 { 00:06:29.249 "method": "bdev_wait_for_examine" 00:06:29.249 } 00:06:29.249 ] 00:06:29.249 } 00:06:29.249 ] 00:06:29.249 } 00:06:29.249 [2024-07-15 19:44:23.472550] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
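[Editor's note] The dd_rw_offset round traced here writes a 4096-byte generated string to the bdev at block offset 1 (--seek=1), reads a single block back from the same offset (--skip=1 --count=1), and checks that the bytes read match the original string; the very long [[ ... == ... ]] expansion that follows is xtrace spelling out that comparison. A sketch of the flow, reusing the DD/TESTDIR shorthands and gen_conf stand-in from the earlier notes; the gen_bytes substitute and the redirects into the dump files are assumptions, since only the data= assignment and the spdk_dd calls appear in the trace:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

# stand-in for gen_bytes 4096, which produced the 4096-character string shown above
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
printf %s "$data" > "$TESTDIR/dd.dump0"   # assumed step; not visible in this excerpt

# write one 4096-byte block to the bdev starting at block offset 1
"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
# read a single block back from the same offset
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --skip=1 --count=1 --json <(gen_conf)
# compare the first 4096 bytes read back against the original data
read -rn4096 data_check < "$TESTDIR/dd.dump1"   # the redirect is an assumption
[[ "$data_check" == "$data" ]]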
00:06:29.249 [2024-07-15 19:44:23.472644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:06:29.508 [2024-07-15 19:44:23.610874] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.508 [2024-07-15 19:44:23.721269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.766 [2024-07-15 19:44:23.778293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.023  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:30.023 00:06:30.023 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ b9xpdhy0tohec0cqzuwqfv47w1ky5yzyg1fu28r6o6pxzuek5h55pthe8etk9ca6er4457iqn31ydm43a574zicnyd2pr0wf3pt90dnwjb2eiexx4sa4owyyt4al7sj228zz3mt1nosyzwxtjyiv0xxqfjuu6zt90gmc4k7m8cmzz08lk2u20zxny2wudtfxfpa4gngh5dqcx2k0mp7j2pxqw470la723zxj8bv6r55wz9gl039z75607m31gilmuivpu59asfhh63cx32fd5hsh7dz6btozket2513ermio4pn1m5i2l2hvsq1r479kk28x5h8ow18d1dd2junssgq9a3qgr9u8fugc834p16bv70kh8ny0msbd6mgsofy5l031ibo0n3pfpx74inqcrtjytcuat5tfi9dx1cxkupo69hun2cabem8u6c4yes7r6u5cc8malrpp9afjgkwk3rcohzi7pccpo5ptythraxijdauiog84zbkhgcv27t8nddp49flqjjx8g2am5zo48o6ozi3crweeob804o2mr1i539hguw2a3dg171i2jfl6ssrl5qy6lzd0xs1wwyn43nwr6yqxxdt0v7ze2ur6n88spp20pom3q96giktemzxig8v20pr0yg2gbtq0nndek3hi0tbpqank1cl5sqozssq16mo8yey2ddk2wrp9wytd2qjn0ehk4f36338nc2gpqroh7rkvnb1bcfekz3exg8ebgk17oc2h8929j5dai2iy1ug09u5l6o9bjce600wf7jpuvbk0wunh9920vhu9bmk0ji4vdq4jabbxpiogcfgrpj8wb8utoydahg0vin8lgp22wya58gi6657vcfb5zru0hapylqysv6lmya5dm2980ovagm49tnfxw7d338i7a3f55dpyeuctsd6ptpjdhvfosjq0m33q7tom0dlsgj2zjidpr8iqyw8iynds7dzhkpyuo38iof7ht5bzngsume7jeackouzlorczvx8n1bjqte0kmld6db06lvvk329k5lko0vpq2b2xe1h0paec03mkrngm2f0zdqlcrtj6qw8p1810xj7c9md3d7cx1y81ej2xa32dba70wzxmt1z2twu7j2g1i58hufebgrjpzbtpfpqvqb29u4aest9j8kwsjj7qp3a734smv5euk6ms2di5toiv3u2t8bpr8pa4pkgvnjtr02qw0wzaqh4rvk8a6t6tjawjvqgxezfzdqyjqfqodxy3bg6fjflknmlkmm7l9icwbfyxowfpo20qhac0ywp96kkmcbwmcddhl8781fcl2i36xd4bk8dvir0m65xn2xc4845fi27c5rht5prk99a8psoi0qlcefcc4yhqo7zhnlwzosxr9zsb2cxf1vbefja9w0tx0qvw4nouz7963ng0wmnwqyah18pmx5q22i5yoggwmi03m47mnb385y9b3hfvhc8l77wtc8areqp5wilfj5yu8llazb4h0fn1a0x93oxdipup4rb4gmzqkz3jwzk1n4t3ldbmi9mjpfw58bx0l5g2zxq9eocrudattj2ajltfcw1absif4pda08cc132wd95utqbfv21r2vgcdd370n2v1qq4ckhtkwlkwd5fnen8bsl433p7hyqj65yny1h5h2k9avbxcm7hkqlfg5lhdhh1vpjqq37hgwt8z58buu2vsxh5nb14alh8in5t55qnh9qa7004dzplxf2zvfd3qinuhrrufvhoe3qed5qaqyhm22t9jwrsiiklkmiegya3g4b709magr7wuajdxlycxqfhah5y3eqrypea6auiov3z5ftlxaxtndbrrndydqrzowkutat57omdb8yi7qlddwb8ou9o7tfdg7vstmo1efnbdelcy23envd5ejgfkd8g7jyza5ncmp2wfrnwhd09x3i64fvl1z4y3mu8970eo1qk6t4j99c9zgprgyaotikpcikyvam2bdwvjngzlwpoxvo9rzri6rlagvj219f4ifosclbqt78vtpdz0nwg7wuef8fwypq71v2znh2e2ca567g6hh9ts1mc21zvvpm2q522h29j86z8pvr3h75l0f4a27ou29k4uqpti0h2o8h798kmwm0z4l5umq82ngo04y88ri739rhsfu8ukukppxwvqj4n431m5hvqjgfedqbnfx4s8yh6ekktjdnyos6urjbze3ij2r4c8dq478zv44w3o0p9pmjg9v3m4h10hr0e4ptw693wamt9fnifhoh9g7eymd2nu4wuv1ocbvbjn45m9yx42bd3puf1ju1pvra4ixvh2u6fw75yyr0b1s2hk5od0baqojfyq27mlk85en7npgiffxqqawzshhrkxb6apvdpaw5nt25wlypo42i0v0idk5t845mmthju9f8rw1fdgtqbyzwds3yidfthtxtl7gttcj8mys5xy26wfj18uc97qp6smtdw675s4imptjfmh5z8f8rue37s7k95f1is8lpnbj04l1ph5co4eungy31b98bi1ylw9p3pbdwez8dj2zsufz2vlrz6hkugapc2s14usnfmvf319vyhltcv94cpupc2hctxgjqkyyaoddupgdytect6
6bjgwlkh03e58t889bnjxkwsk3h1sdh8e6x0fk5r4ckbslvd983twlhk6id95683xbc8ip4sqerfni1u82xepxukr8j2e8vn02n2a4z6y1pwzlnqdfuxzfpgai1nehxmglpftvm1n8du73mmlstxvyrday9v1pnhf44b4v605x351lkrcov8ggvt0elcme5fk1vovg3vynw6gnkmyyhcuzt6fyslxvqm0upcdgues5klrtg25pr4vkk2vdl2b0x7fev2m136txvn48uw41ddw17hmzozbf0v5umqq2e5iox01t4vf4yspk5x74mibsjxxt3eq7nxr5fbvalzbhgp945jicovhil5v2e93mzue25hsc1oz2g1xopsiiujj6cydcw8040idblowh6qpkgu2o0t59mzeehc138xs6ucarmf37gakwsh40pdtejizb78xegm0rv77zlfscaxm6hl9dr9q8ui2ohpvkyj7xaa98mfk3ltk41xzuvhbns71x0mueleh3fbfyl19xwd61zim7f591w3px2tytyxiqvim4rodnpriwqtuifrzkkwt3jvrrwv7db3nm7azwoj1rp5sfwumwxfy4ms4aafiecj8joy3lr8piqvteq9zffmkw0pcp4ifzjyl1slsx3nlct58fixx9ccs2vjv0ung71y0sm73dpqdbgr36amfbo0cjmnfr07vgeh1ium83du32e1t3epaapkkfn37fren7nwtzzc6nl319xa3gppnwduoh9wq7xv9w608kp6s5cbkuminlaeyj19vt3lie8m9g1ccumwjix05z6t74d4dx6h8gkcxpzh2rjqsj9v3kiv86154i4udfivjsfisdaa2wxvo5lbnskqmim54d3r2nqqkdhwxv04kxv4e9dg83m4vbe57em70c551az8q3fgao9jnjh9y2b6t75ni8eh0plh4ygeb5ufwi8qify3ytehkbxanawordz5t898c4e8krsxe2chd78dufhwa2apwtjggv1y4r20s3sw15ocb2f0ngjp6v5ddgrxi31brxfz9d66h9g88yj28nda9dwplekvvq0wclxb2sslip6pzzqj0p5w2oiyg83kamzv4b5hgt9j1g4i04cshv8i9nklnvw2l31mj8xby732azev9awxxlohylmmo77s4cjmw52hvj4x4isoskknuduxtret1qcsbjwg0i2cp18vv0ynlobqe5avnz98sjvx27457tr84l7vv2kd6ygkax6qjrijqajdp6a8zswotp566y6i8rl2ioh62tjwfsmcbspa20vx1zwz68gi4nxa2km8ieflowdl8ducmrjrpulfnywin1hm2018y0tzexqa2qudol6t1bm0tr9q7lhroqyznfhzrs6b01ej5frwvi7c1uf1ag0uitbgou4syd8igjxkn35uv8c914uqmh5c25xz == \b\9\x\p\d\h\y\0\t\o\h\e\c\0\c\q\z\u\w\q\f\v\4\7\w\1\k\y\5\y\z\y\g\1\f\u\2\8\r\6\o\6\p\x\z\u\e\k\5\h\5\5\p\t\h\e\8\e\t\k\9\c\a\6\e\r\4\4\5\7\i\q\n\3\1\y\d\m\4\3\a\5\7\4\z\i\c\n\y\d\2\p\r\0\w\f\3\p\t\9\0\d\n\w\j\b\2\e\i\e\x\x\4\s\a\4\o\w\y\y\t\4\a\l\7\s\j\2\2\8\z\z\3\m\t\1\n\o\s\y\z\w\x\t\j\y\i\v\0\x\x\q\f\j\u\u\6\z\t\9\0\g\m\c\4\k\7\m\8\c\m\z\z\0\8\l\k\2\u\2\0\z\x\n\y\2\w\u\d\t\f\x\f\p\a\4\g\n\g\h\5\d\q\c\x\2\k\0\m\p\7\j\2\p\x\q\w\4\7\0\l\a\7\2\3\z\x\j\8\b\v\6\r\5\5\w\z\9\g\l\0\3\9\z\7\5\6\0\7\m\3\1\g\i\l\m\u\i\v\p\u\5\9\a\s\f\h\h\6\3\c\x\3\2\f\d\5\h\s\h\7\d\z\6\b\t\o\z\k\e\t\2\5\1\3\e\r\m\i\o\4\p\n\1\m\5\i\2\l\2\h\v\s\q\1\r\4\7\9\k\k\2\8\x\5\h\8\o\w\1\8\d\1\d\d\2\j\u\n\s\s\g\q\9\a\3\q\g\r\9\u\8\f\u\g\c\8\3\4\p\1\6\b\v\7\0\k\h\8\n\y\0\m\s\b\d\6\m\g\s\o\f\y\5\l\0\3\1\i\b\o\0\n\3\p\f\p\x\7\4\i\n\q\c\r\t\j\y\t\c\u\a\t\5\t\f\i\9\d\x\1\c\x\k\u\p\o\6\9\h\u\n\2\c\a\b\e\m\8\u\6\c\4\y\e\s\7\r\6\u\5\c\c\8\m\a\l\r\p\p\9\a\f\j\g\k\w\k\3\r\c\o\h\z\i\7\p\c\c\p\o\5\p\t\y\t\h\r\a\x\i\j\d\a\u\i\o\g\8\4\z\b\k\h\g\c\v\2\7\t\8\n\d\d\p\4\9\f\l\q\j\j\x\8\g\2\a\m\5\z\o\4\8\o\6\o\z\i\3\c\r\w\e\e\o\b\8\0\4\o\2\m\r\1\i\5\3\9\h\g\u\w\2\a\3\d\g\1\7\1\i\2\j\f\l\6\s\s\r\l\5\q\y\6\l\z\d\0\x\s\1\w\w\y\n\4\3\n\w\r\6\y\q\x\x\d\t\0\v\7\z\e\2\u\r\6\n\8\8\s\p\p\2\0\p\o\m\3\q\9\6\g\i\k\t\e\m\z\x\i\g\8\v\2\0\p\r\0\y\g\2\g\b\t\q\0\n\n\d\e\k\3\h\i\0\t\b\p\q\a\n\k\1\c\l\5\s\q\o\z\s\s\q\1\6\m\o\8\y\e\y\2\d\d\k\2\w\r\p\9\w\y\t\d\2\q\j\n\0\e\h\k\4\f\3\6\3\3\8\n\c\2\g\p\q\r\o\h\7\r\k\v\n\b\1\b\c\f\e\k\z\3\e\x\g\8\e\b\g\k\1\7\o\c\2\h\8\9\2\9\j\5\d\a\i\2\i\y\1\u\g\0\9\u\5\l\6\o\9\b\j\c\e\6\0\0\w\f\7\j\p\u\v\b\k\0\w\u\n\h\9\9\2\0\v\h\u\9\b\m\k\0\j\i\4\v\d\q\4\j\a\b\b\x\p\i\o\g\c\f\g\r\p\j\8\w\b\8\u\t\o\y\d\a\h\g\0\v\i\n\8\l\g\p\2\2\w\y\a\5\8\g\i\6\6\5\7\v\c\f\b\5\z\r\u\0\h\a\p\y\l\q\y\s\v\6\l\m\y\a\5\d\m\2\9\8\0\o\v\a\g\m\4\9\t\n\f\x\w\7\d\3\3\8\i\7\a\3\f\5\5\d\p\y\e\u\c\t\s\d\6\p\t\p\j\d\h\v\f\o\s\j\q\0\m\3\3\q\7\t\o\m\0\d\l\s\g\j\2\z\j\i\d\p\r\8\i\q\y\w\8\i\y\n\d\s\7\d\z\h\k\p\y\u\o\3\8\i\o\f\7\h\t\5\b\z\n\g\s\u\m\e\7\j\e\a\c\k\o\u\z\l\o\r\c\z\v\x\8\n\1\b\j\q\t\e\0\k\m\l\d\6\d\b\0\6\l\v\v\k\3\2\9\k\5\
l\k\o\0\v\p\q\2\b\2\x\e\1\h\0\p\a\e\c\0\3\m\k\r\n\g\m\2\f\0\z\d\q\l\c\r\t\j\6\q\w\8\p\1\8\1\0\x\j\7\c\9\m\d\3\d\7\c\x\1\y\8\1\e\j\2\x\a\3\2\d\b\a\7\0\w\z\x\m\t\1\z\2\t\w\u\7\j\2\g\1\i\5\8\h\u\f\e\b\g\r\j\p\z\b\t\p\f\p\q\v\q\b\2\9\u\4\a\e\s\t\9\j\8\k\w\s\j\j\7\q\p\3\a\7\3\4\s\m\v\5\e\u\k\6\m\s\2\d\i\5\t\o\i\v\3\u\2\t\8\b\p\r\8\p\a\4\p\k\g\v\n\j\t\r\0\2\q\w\0\w\z\a\q\h\4\r\v\k\8\a\6\t\6\t\j\a\w\j\v\q\g\x\e\z\f\z\d\q\y\j\q\f\q\o\d\x\y\3\b\g\6\f\j\f\l\k\n\m\l\k\m\m\7\l\9\i\c\w\b\f\y\x\o\w\f\p\o\2\0\q\h\a\c\0\y\w\p\9\6\k\k\m\c\b\w\m\c\d\d\h\l\8\7\8\1\f\c\l\2\i\3\6\x\d\4\b\k\8\d\v\i\r\0\m\6\5\x\n\2\x\c\4\8\4\5\f\i\2\7\c\5\r\h\t\5\p\r\k\9\9\a\8\p\s\o\i\0\q\l\c\e\f\c\c\4\y\h\q\o\7\z\h\n\l\w\z\o\s\x\r\9\z\s\b\2\c\x\f\1\v\b\e\f\j\a\9\w\0\t\x\0\q\v\w\4\n\o\u\z\7\9\6\3\n\g\0\w\m\n\w\q\y\a\h\1\8\p\m\x\5\q\2\2\i\5\y\o\g\g\w\m\i\0\3\m\4\7\m\n\b\3\8\5\y\9\b\3\h\f\v\h\c\8\l\7\7\w\t\c\8\a\r\e\q\p\5\w\i\l\f\j\5\y\u\8\l\l\a\z\b\4\h\0\f\n\1\a\0\x\9\3\o\x\d\i\p\u\p\4\r\b\4\g\m\z\q\k\z\3\j\w\z\k\1\n\4\t\3\l\d\b\m\i\9\m\j\p\f\w\5\8\b\x\0\l\5\g\2\z\x\q\9\e\o\c\r\u\d\a\t\t\j\2\a\j\l\t\f\c\w\1\a\b\s\i\f\4\p\d\a\0\8\c\c\1\3\2\w\d\9\5\u\t\q\b\f\v\2\1\r\2\v\g\c\d\d\3\7\0\n\2\v\1\q\q\4\c\k\h\t\k\w\l\k\w\d\5\f\n\e\n\8\b\s\l\4\3\3\p\7\h\y\q\j\6\5\y\n\y\1\h\5\h\2\k\9\a\v\b\x\c\m\7\h\k\q\l\f\g\5\l\h\d\h\h\1\v\p\j\q\q\3\7\h\g\w\t\8\z\5\8\b\u\u\2\v\s\x\h\5\n\b\1\4\a\l\h\8\i\n\5\t\5\5\q\n\h\9\q\a\7\0\0\4\d\z\p\l\x\f\2\z\v\f\d\3\q\i\n\u\h\r\r\u\f\v\h\o\e\3\q\e\d\5\q\a\q\y\h\m\2\2\t\9\j\w\r\s\i\i\k\l\k\m\i\e\g\y\a\3\g\4\b\7\0\9\m\a\g\r\7\w\u\a\j\d\x\l\y\c\x\q\f\h\a\h\5\y\3\e\q\r\y\p\e\a\6\a\u\i\o\v\3\z\5\f\t\l\x\a\x\t\n\d\b\r\r\n\d\y\d\q\r\z\o\w\k\u\t\a\t\5\7\o\m\d\b\8\y\i\7\q\l\d\d\w\b\8\o\u\9\o\7\t\f\d\g\7\v\s\t\m\o\1\e\f\n\b\d\e\l\c\y\2\3\e\n\v\d\5\e\j\g\f\k\d\8\g\7\j\y\z\a\5\n\c\m\p\2\w\f\r\n\w\h\d\0\9\x\3\i\6\4\f\v\l\1\z\4\y\3\m\u\8\9\7\0\e\o\1\q\k\6\t\4\j\9\9\c\9\z\g\p\r\g\y\a\o\t\i\k\p\c\i\k\y\v\a\m\2\b\d\w\v\j\n\g\z\l\w\p\o\x\v\o\9\r\z\r\i\6\r\l\a\g\v\j\2\1\9\f\4\i\f\o\s\c\l\b\q\t\7\8\v\t\p\d\z\0\n\w\g\7\w\u\e\f\8\f\w\y\p\q\7\1\v\2\z\n\h\2\e\2\c\a\5\6\7\g\6\h\h\9\t\s\1\m\c\2\1\z\v\v\p\m\2\q\5\2\2\h\2\9\j\8\6\z\8\p\v\r\3\h\7\5\l\0\f\4\a\2\7\o\u\2\9\k\4\u\q\p\t\i\0\h\2\o\8\h\7\9\8\k\m\w\m\0\z\4\l\5\u\m\q\8\2\n\g\o\0\4\y\8\8\r\i\7\3\9\r\h\s\f\u\8\u\k\u\k\p\p\x\w\v\q\j\4\n\4\3\1\m\5\h\v\q\j\g\f\e\d\q\b\n\f\x\4\s\8\y\h\6\e\k\k\t\j\d\n\y\o\s\6\u\r\j\b\z\e\3\i\j\2\r\4\c\8\d\q\4\7\8\z\v\4\4\w\3\o\0\p\9\p\m\j\g\9\v\3\m\4\h\1\0\h\r\0\e\4\p\t\w\6\9\3\w\a\m\t\9\f\n\i\f\h\o\h\9\g\7\e\y\m\d\2\n\u\4\w\u\v\1\o\c\b\v\b\j\n\4\5\m\9\y\x\4\2\b\d\3\p\u\f\1\j\u\1\p\v\r\a\4\i\x\v\h\2\u\6\f\w\7\5\y\y\r\0\b\1\s\2\h\k\5\o\d\0\b\a\q\o\j\f\y\q\2\7\m\l\k\8\5\e\n\7\n\p\g\i\f\f\x\q\q\a\w\z\s\h\h\r\k\x\b\6\a\p\v\d\p\a\w\5\n\t\2\5\w\l\y\p\o\4\2\i\0\v\0\i\d\k\5\t\8\4\5\m\m\t\h\j\u\9\f\8\r\w\1\f\d\g\t\q\b\y\z\w\d\s\3\y\i\d\f\t\h\t\x\t\l\7\g\t\t\c\j\8\m\y\s\5\x\y\2\6\w\f\j\1\8\u\c\9\7\q\p\6\s\m\t\d\w\6\7\5\s\4\i\m\p\t\j\f\m\h\5\z\8\f\8\r\u\e\3\7\s\7\k\9\5\f\1\i\s\8\l\p\n\b\j\0\4\l\1\p\h\5\c\o\4\e\u\n\g\y\3\1\b\9\8\b\i\1\y\l\w\9\p\3\p\b\d\w\e\z\8\d\j\2\z\s\u\f\z\2\v\l\r\z\6\h\k\u\g\a\p\c\2\s\1\4\u\s\n\f\m\v\f\3\1\9\v\y\h\l\t\c\v\9\4\c\p\u\p\c\2\h\c\t\x\g\j\q\k\y\y\a\o\d\d\u\p\g\d\y\t\e\c\t\6\6\b\j\g\w\l\k\h\0\3\e\5\8\t\8\8\9\b\n\j\x\k\w\s\k\3\h\1\s\d\h\8\e\6\x\0\f\k\5\r\4\c\k\b\s\l\v\d\9\8\3\t\w\l\h\k\6\i\d\9\5\6\8\3\x\b\c\8\i\p\4\s\q\e\r\f\n\i\1\u\8\2\x\e\p\x\u\k\r\8\j\2\e\8\v\n\0\2\n\2\a\4\z\6\y\1\p\w\z\l\n\q\d\f\u\x\z\f\p\g\a\i\1\n\e\h\x\m\g\l\p\f\t\v\m\1\n\8\d\u\7\3\m\m\l\s\t\x\v\y\r\d\a\y\9\v\1\p\n\h\f\4\4\b\4\v\6\0\5\x\3\5\1\l\k\r\c\o\v\8\g\g\v\t\0\e\l
\c\m\e\5\f\k\1\v\o\v\g\3\v\y\n\w\6\g\n\k\m\y\y\h\c\u\z\t\6\f\y\s\l\x\v\q\m\0\u\p\c\d\g\u\e\s\5\k\l\r\t\g\2\5\p\r\4\v\k\k\2\v\d\l\2\b\0\x\7\f\e\v\2\m\1\3\6\t\x\v\n\4\8\u\w\4\1\d\d\w\1\7\h\m\z\o\z\b\f\0\v\5\u\m\q\q\2\e\5\i\o\x\0\1\t\4\v\f\4\y\s\p\k\5\x\7\4\m\i\b\s\j\x\x\t\3\e\q\7\n\x\r\5\f\b\v\a\l\z\b\h\g\p\9\4\5\j\i\c\o\v\h\i\l\5\v\2\e\9\3\m\z\u\e\2\5\h\s\c\1\o\z\2\g\1\x\o\p\s\i\i\u\j\j\6\c\y\d\c\w\8\0\4\0\i\d\b\l\o\w\h\6\q\p\k\g\u\2\o\0\t\5\9\m\z\e\e\h\c\1\3\8\x\s\6\u\c\a\r\m\f\3\7\g\a\k\w\s\h\4\0\p\d\t\e\j\i\z\b\7\8\x\e\g\m\0\r\v\7\7\z\l\f\s\c\a\x\m\6\h\l\9\d\r\9\q\8\u\i\2\o\h\p\v\k\y\j\7\x\a\a\9\8\m\f\k\3\l\t\k\4\1\x\z\u\v\h\b\n\s\7\1\x\0\m\u\e\l\e\h\3\f\b\f\y\l\1\9\x\w\d\6\1\z\i\m\7\f\5\9\1\w\3\p\x\2\t\y\t\y\x\i\q\v\i\m\4\r\o\d\n\p\r\i\w\q\t\u\i\f\r\z\k\k\w\t\3\j\v\r\r\w\v\7\d\b\3\n\m\7\a\z\w\o\j\1\r\p\5\s\f\w\u\m\w\x\f\y\4\m\s\4\a\a\f\i\e\c\j\8\j\o\y\3\l\r\8\p\i\q\v\t\e\q\9\z\f\f\m\k\w\0\p\c\p\4\i\f\z\j\y\l\1\s\l\s\x\3\n\l\c\t\5\8\f\i\x\x\9\c\c\s\2\v\j\v\0\u\n\g\7\1\y\0\s\m\7\3\d\p\q\d\b\g\r\3\6\a\m\f\b\o\0\c\j\m\n\f\r\0\7\v\g\e\h\1\i\u\m\8\3\d\u\3\2\e\1\t\3\e\p\a\a\p\k\k\f\n\3\7\f\r\e\n\7\n\w\t\z\z\c\6\n\l\3\1\9\x\a\3\g\p\p\n\w\d\u\o\h\9\w\q\7\x\v\9\w\6\0\8\k\p\6\s\5\c\b\k\u\m\i\n\l\a\e\y\j\1\9\v\t\3\l\i\e\8\m\9\g\1\c\c\u\m\w\j\i\x\0\5\z\6\t\7\4\d\4\d\x\6\h\8\g\k\c\x\p\z\h\2\r\j\q\s\j\9\v\3\k\i\v\8\6\1\5\4\i\4\u\d\f\i\v\j\s\f\i\s\d\a\a\2\w\x\v\o\5\l\b\n\s\k\q\m\i\m\5\4\d\3\r\2\n\q\q\k\d\h\w\x\v\0\4\k\x\v\4\e\9\d\g\8\3\m\4\v\b\e\5\7\e\m\7\0\c\5\5\1\a\z\8\q\3\f\g\a\o\9\j\n\j\h\9\y\2\b\6\t\7\5\n\i\8\e\h\0\p\l\h\4\y\g\e\b\5\u\f\w\i\8\q\i\f\y\3\y\t\e\h\k\b\x\a\n\a\w\o\r\d\z\5\t\8\9\8\c\4\e\8\k\r\s\x\e\2\c\h\d\7\8\d\u\f\h\w\a\2\a\p\w\t\j\g\g\v\1\y\4\r\2\0\s\3\s\w\1\5\o\c\b\2\f\0\n\g\j\p\6\v\5\d\d\g\r\x\i\3\1\b\r\x\f\z\9\d\6\6\h\9\g\8\8\y\j\2\8\n\d\a\9\d\w\p\l\e\k\v\v\q\0\w\c\l\x\b\2\s\s\l\i\p\6\p\z\z\q\j\0\p\5\w\2\o\i\y\g\8\3\k\a\m\z\v\4\b\5\h\g\t\9\j\1\g\4\i\0\4\c\s\h\v\8\i\9\n\k\l\n\v\w\2\l\3\1\m\j\8\x\b\y\7\3\2\a\z\e\v\9\a\w\x\x\l\o\h\y\l\m\m\o\7\7\s\4\c\j\m\w\5\2\h\v\j\4\x\4\i\s\o\s\k\k\n\u\d\u\x\t\r\e\t\1\q\c\s\b\j\w\g\0\i\2\c\p\1\8\v\v\0\y\n\l\o\b\q\e\5\a\v\n\z\9\8\s\j\v\x\2\7\4\5\7\t\r\8\4\l\7\v\v\2\k\d\6\y\g\k\a\x\6\q\j\r\i\j\q\a\j\d\p\6\a\8\z\s\w\o\t\p\5\6\6\y\6\i\8\r\l\2\i\o\h\6\2\t\j\w\f\s\m\c\b\s\p\a\2\0\v\x\1\z\w\z\6\8\g\i\4\n\x\a\2\k\m\8\i\e\f\l\o\w\d\l\8\d\u\c\m\r\j\r\p\u\l\f\n\y\w\i\n\1\h\m\2\0\1\8\y\0\t\z\e\x\q\a\2\q\u\d\o\l\6\t\1\b\m\0\t\r\9\q\7\l\h\r\o\q\y\z\n\f\h\z\r\s\6\b\0\1\e\j\5\f\r\w\v\i\7\c\1\u\f\1\a\g\0\u\i\t\b\g\o\u\4\s\y\d\8\i\g\j\x\k\n\3\5\u\v\8\c\9\1\4\u\q\m\h\5\c\2\5\x\z ]] 00:06:30.024 ************************************ 00:06:30.024 END TEST dd_rw_offset 00:06:30.024 ************************************ 00:06:30.024 00:06:30.024 real 0m1.427s 00:06:30.024 user 0m0.984s 00:06:30.024 sys 0m0.617s 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:30.024 19:44:24 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.024 19:44:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.024 [2024-07-15 19:44:24.208497] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:30.024 [2024-07-15 19:44:24.208595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63019 ] 00:06:30.024 { 00:06:30.024 "subsystems": [ 00:06:30.024 { 00:06:30.024 "subsystem": "bdev", 00:06:30.024 "config": [ 00:06:30.024 { 00:06:30.024 "params": { 00:06:30.024 "trtype": "pcie", 00:06:30.024 "traddr": "0000:00:10.0", 00:06:30.024 "name": "Nvme0" 00:06:30.024 }, 00:06:30.024 "method": "bdev_nvme_attach_controller" 00:06:30.024 }, 00:06:30.024 { 00:06:30.024 "method": "bdev_wait_for_examine" 00:06:30.024 } 00:06:30.024 ] 00:06:30.024 } 00:06:30.024 ] 00:06:30.024 } 00:06:30.282 [2024-07-15 19:44:24.342015] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.282 [2024-07-15 19:44:24.432049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.282 [2024-07-15 19:44:24.484446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.800  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:30.800 00:06:30.800 19:44:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.800 00:06:30.800 real 0m18.946s 00:06:30.800 user 0m13.685s 00:06:30.800 sys 0m6.911s 00:06:30.800 19:44:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.800 19:44:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 ************************************ 00:06:30.800 END TEST spdk_dd_basic_rw 00:06:30.800 ************************************ 00:06:30.800 19:44:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:30.800 19:44:24 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:30.800 19:44:24 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.800 19:44:24 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.800 19:44:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 ************************************ 00:06:30.800 START TEST spdk_dd_posix 00:06:30.800 ************************************ 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:30.800 * Looking for test storage... 
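[Editor's note] The START TEST / END TEST banners, the real/user/sys lines, and invocations such as "run_test dd_rw_offset basic_offset" and "run_test spdk_dd_posix .../test/dd/posix.sh" all come from the harness's run_test wrapper. Its real definition lives in test/common/autotest_common.sh and is not reproduced in this log, so the following is only a stand-in that mimics the visible behaviour: print a banner, time the wrapped command, print a closing banner, and propagate the exit code.

run_test() {
    # Stand-in only; banner shape and timing output modelled on the log above.
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# The log's own invocations look like:
#   run_test dd_rw_offset basic_offset
#   run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
run_test demo_sleep sleep 1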
00:06:30.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:30.800 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:30.801 * First test run, liburing in use 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:30.801 ************************************ 00:06:30.801 START TEST dd_flag_append 00:06:30.801 ************************************ 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=winw4756zg8wz3wu9mwu5uxrq2o8o1jr 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=sl0k67um9nzpoqoi5yw64m2inrp992gq 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s winw4756zg8wz3wu9mwu5uxrq2o8o1jr 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s sl0k67um9nzpoqoi5yw64m2inrp992gq 00:06:30.801 19:44:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:30.801 [2024-07-15 19:44:25.034971] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
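[Editor's note] The dd_flag_append case launched just above seeds both dump files with 32-byte strings from gen_bytes, runs spdk_dd with --oflag=append so the input is appended to the destination rather than overwriting it, and then (in the check that follows) expects dd.dump1 to hold its original string immediately followed by dd.dump0's. A sketch of that flow with the strings from this run; the redirects into the dump files and the $(<file) readback are assumptions, as only the printf calls and the spdk_dd command appear in the xtrace:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

dump0=winw4756zg8wz3wu9mwu5uxrq2o8o1jr   # 32 bytes from gen_bytes, as logged above
dump1=sl0k67um9nzpoqoi5yw64m2inrp992gq

printf %s "$dump0" > "$TESTDIR/dd.dump0"   # assumed redirects; only printf is traced
printf %s "$dump1" > "$TESTDIR/dd.dump1"

# append dd.dump0 onto dd.dump1 instead of overwriting it
"$DD" --if="$TESTDIR/dd.dump0" --of="$TESTDIR/dd.dump1" --oflag=append

# the test then expects dd.dump1 to hold dump1 immediately followed by dump0
[[ "$(<"$TESTDIR/dd.dump1")" == "${dump1}${dump0}" ]]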
00:06:30.801 [2024-07-15 19:44:25.035093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63086 ] 00:06:31.060 [2024-07-15 19:44:25.169366] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.060 [2024-07-15 19:44:25.277237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.319 [2024-07-15 19:44:25.331978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.578  Copying: 32/32 [B] (average 31 kBps) 00:06:31.578 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ sl0k67um9nzpoqoi5yw64m2inrp992gqwinw4756zg8wz3wu9mwu5uxrq2o8o1jr == \s\l\0\k\6\7\u\m\9\n\z\p\o\q\o\i\5\y\w\6\4\m\2\i\n\r\p\9\9\2\g\q\w\i\n\w\4\7\5\6\z\g\8\w\z\3\w\u\9\m\w\u\5\u\x\r\q\2\o\8\o\1\j\r ]] 00:06:31.578 00:06:31.578 real 0m0.603s 00:06:31.578 user 0m0.338s 00:06:31.578 sys 0m0.285s 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:31.578 ************************************ 00:06:31.578 END TEST dd_flag_append 00:06:31.578 ************************************ 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.578 ************************************ 00:06:31.578 START TEST dd_flag_directory 00:06:31.578 ************************************ 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.578 19:44:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.578 [2024-07-15 19:44:25.684218] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:31.578 [2024-07-15 19:44:25.684401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63115 ] 00:06:31.837 [2024-07-15 19:44:25.823702] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.837 [2024-07-15 19:44:25.935987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.837 [2024-07-15 19:44:25.990805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.837 [2024-07-15 19:44:26.025791] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:31.837 [2024-07-15 19:44:26.025863] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:31.837 [2024-07-15 19:44:26.025893] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.096 [2024-07-15 19:44:26.139962] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.096 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.096 [2024-07-15 19:44:26.294811] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:32.096 [2024-07-15 19:44:26.294964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:06:32.355 [2024-07-15 19:44:26.434721] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.355 [2024-07-15 19:44:26.551974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.614 [2024-07-15 19:44:26.608027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.614 [2024-07-15 19:44:26.643459] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.614 [2024-07-15 19:44:26.643517] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.614 [2024-07-15 19:44:26.643533] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.614 [2024-07-15 19:44:26.759193] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.873 00:06:32.873 real 0m1.232s 00:06:32.873 user 0m0.718s 00:06:32.873 sys 0m0.303s 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:32.873 ************************************ 00:06:32.873 END TEST dd_flag_directory 00:06:32.873 
************************************ 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.873 ************************************ 00:06:32.873 START TEST dd_flag_nofollow 00:06:32.873 ************************************ 00:06:32.873 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.874 19:44:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.874 
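The dd_flag_nofollow case links dd.dump0.link and dd.dump1.link to the scratch files and checks the flag in both directions: with --iflag=nofollow or --oflag=nofollow the open of a symlink must fail ("Too many levels of symbolic links", i.e. ELOOP), while the closing copy through dd.dump0.link without the flag is expected to succeed. Condensed:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # reading via the link must fail
  ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # writing via the link must fail
  spdk_dd --if=dd.dump0.link --of=dd.dump1                      # no flag: the link is followed normally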
[2024-07-15 19:44:26.976846] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:32.874 [2024-07-15 19:44:26.977532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63153 ] 00:06:33.133 [2024-07-15 19:44:27.117763] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.133 [2024-07-15 19:44:27.229443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.133 [2024-07-15 19:44:27.282986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.133 [2024-07-15 19:44:27.314893] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:33.133 [2024-07-15 19:44:27.314965] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:33.133 [2024-07-15 19:44:27.314995] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.393 [2024-07-15 19:44:27.424590] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.393 19:44:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.393 [2024-07-15 19:44:27.578409] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:33.393 [2024-07-15 19:44:27.578513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:06:33.652 [2024-07-15 19:44:27.718187] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.652 [2024-07-15 19:44:27.834971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.652 [2024-07-15 19:44:27.891003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.912 [2024-07-15 19:44:27.926727] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:33.912 [2024-07-15 19:44:27.926799] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:33.912 [2024-07-15 19:44:27.926815] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.912 [2024-07-15 19:44:28.042343] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:33.912 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.170 [2024-07-15 19:44:28.192557] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:34.171 [2024-07-15 19:44:28.192649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63175 ] 00:06:34.171 [2024-07-15 19:44:28.322708] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.429 [2024-07-15 19:44:28.431558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.429 [2024-07-15 19:44:28.486381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.689  Copying: 512/512 [B] (average 500 kBps) 00:06:34.689 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ mrvov8j2ujl0x97eqzvxue9dz06t2l4q3jczxi0nhs7mzjrcvsoy8a9iq1c8g056m6ljnkpqga1h13jaxjkjagx4eal5ceagd4p8kizicgy0u93vugshovr2e3qkni9n455b05xzodhe201680dr3yvzk8asuir3o4tjdireh289d1rkbb172km8p00k3a248pmkr8rxraprpxh48639iynhe53v118bo9iy39tmkwjs0ivsdn0y0cjcjwwyu71lr1q7sbsjhfatx3l8z1zbk03t1q2jj10zo0v0aifqqzs0zfugdz90egu59qw73irjdumuh1pdaoymhyx3nymlw52a9au6og7k4mw1am5gzayxem586dwaak3n6hn2e895bitg5dmf7ni1hdop5y7wko75dgrmvsw19n9zgy8fwydrw5qv2hldm9yva3pdmethkexu0oelmw4sr6e4ecejpizg9box7mpkbe3bpkhy653tu0hjhhc03bz9msyagzt6 == \m\r\v\o\v\8\j\2\u\j\l\0\x\9\7\e\q\z\v\x\u\e\9\d\z\0\6\t\2\l\4\q\3\j\c\z\x\i\0\n\h\s\7\m\z\j\r\c\v\s\o\y\8\a\9\i\q\1\c\8\g\0\5\6\m\6\l\j\n\k\p\q\g\a\1\h\1\3\j\a\x\j\k\j\a\g\x\4\e\a\l\5\c\e\a\g\d\4\p\8\k\i\z\i\c\g\y\0\u\9\3\v\u\g\s\h\o\v\r\2\e\3\q\k\n\i\9\n\4\5\5\b\0\5\x\z\o\d\h\e\2\0\1\6\8\0\d\r\3\y\v\z\k\8\a\s\u\i\r\3\o\4\t\j\d\i\r\e\h\2\8\9\d\1\r\k\b\b\1\7\2\k\m\8\p\0\0\k\3\a\2\4\8\p\m\k\r\8\r\x\r\a\p\r\p\x\h\4\8\6\3\9\i\y\n\h\e\5\3\v\1\1\8\b\o\9\i\y\3\9\t\m\k\w\j\s\0\i\v\s\d\n\0\y\0\c\j\c\j\w\w\y\u\7\1\l\r\1\q\7\s\b\s\j\h\f\a\t\x\3\l\8\z\1\z\b\k\0\3\t\1\q\2\j\j\1\0\z\o\0\v\0\a\i\f\q\q\z\s\0\z\f\u\g\d\z\9\0\e\g\u\5\9\q\w\7\3\i\r\j\d\u\m\u\h\1\p\d\a\o\y\m\h\y\x\3\n\y\m\l\w\5\2\a\9\a\u\6\o\g\7\k\4\m\w\1\a\m\5\g\z\a\y\x\e\m\5\8\6\d\w\a\a\k\3\n\6\h\n\2\e\8\9\5\b\i\t\g\5\d\m\f\7\n\i\1\h\d\o\p\5\y\7\w\k\o\7\5\d\g\r\m\v\s\w\1\9\n\9\z\g\y\8\f\w\y\d\r\w\5\q\v\2\h\l\d\m\9\y\v\a\3\p\d\m\e\t\h\k\e\x\u\0\o\e\l\m\w\4\s\r\6\e\4\e\c\e\j\p\i\z\g\9\b\o\x\7\m\p\k\b\e\3\b\p\k\h\y\6\5\3\t\u\0\h\j\h\h\c\0\3\b\z\9\m\s\y\a\g\z\t\6 ]] 00:06:34.690 00:06:34.690 real 0m1.817s 00:06:34.690 user 0m1.062s 00:06:34.690 sys 0m0.574s 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:34.690 ************************************ 00:06:34.690 END TEST dd_flag_nofollow 00:06:34.690 ************************************ 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.690 ************************************ 00:06:34.690 START TEST dd_flag_noatime 00:06:34.690 ************************************ 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:34.690 19:44:28 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721072668 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721072668 00:06:34.690 19:44:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:35.666 19:44:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.666 [2024-07-15 19:44:29.872371] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:35.666 [2024-07-15 19:44:29.872479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63218 ] 00:06:35.925 [2024-07-15 19:44:30.011399] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.925 [2024-07-15 19:44:30.117181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.184 [2024-07-15 19:44:30.174632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.184  Copying: 512/512 [B] (average 500 kBps) 00:06:36.184 00:06:36.184 19:44:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.184 19:44:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721072668 )) 00:06:36.184 19:44:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.444 19:44:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721072668 )) 00:06:36.444 19:44:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.444 [2024-07-15 19:44:30.480532] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
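The dd_flag_noatime case running here pins the flag's effect down with stat --printf=%X: after the copy with --iflag=noatime the source's access time must still equal the value recorded before the sleep 1, and after the plain copy that follows it must have moved forward. Roughly:

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( atime_before == $(stat --printf=%X dd.dump0) ))   # noatime: source atime untouched
  spdk_dd --if=dd.dump0 --of=dd.dump1
  (( atime_before < $(stat --printf=%X dd.dump0) ))    # a normal read bumps the atime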
00:06:36.444 [2024-07-15 19:44:30.480624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:06:36.444 [2024-07-15 19:44:30.618147] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.703 [2024-07-15 19:44:30.721419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.703 [2024-07-15 19:44:30.776845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.961  Copying: 512/512 [B] (average 500 kBps) 00:06:36.961 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721072670 )) 00:06:36.961 00:06:36.961 real 0m2.247s 00:06:36.961 user 0m0.700s 00:06:36.961 sys 0m0.585s 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.961 ************************************ 00:06:36.961 END TEST dd_flag_noatime 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:36.961 ************************************ 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.961 ************************************ 00:06:36.961 START TEST dd_flags_misc 00:06:36.961 ************************************ 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.961 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:36.961 [2024-07-15 19:44:31.141926] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:36.961 [2024-07-15 19:44:31.142024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63260 ] 00:06:37.220 [2024-07-15 19:44:31.278850] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.220 [2024-07-15 19:44:31.378042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.220 [2024-07-15 19:44:31.432961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.479  Copying: 512/512 [B] (average 500 kBps) 00:06:37.479 00:06:37.479 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ w3gyiupn87oloa8a0fq4em2ckogr75zwqzxi3ycisqzn4yj67kcebhadjtbp8bu54dgmsururr3cuxk8hfdt7uckmec6t5ondm88t76o718bigj5rt6k06lsmztz6ilazoedq422fwvebtjgl60b3q1opzuc6fmjhq6eem265xzs7203bw89q0qedwj60bmpa0epdabd8qqtaqt3qarfipvcvf6bx2ex7hjl7hubt1tj343ov81xlf1vuibtkuismh4ic8myo7kt70uwc2bn5ksw7hyxpsovjcb09fvexi2ng9d73uiedklypdksbl4niue3w69tq9eioa5761kn4xjfehz89wrsbdck1y2et0v03pxeykvipwx01j2k8l8866oauoqf4k4zmhein5sgyaurltdx9wzen2uyy0etcqw6dphulbp9dzdffl83wdrb198q9r2mj1q9o66vmqrr4ca5taw1psmne5k5cr0h4f3o6ieo6j8x9e909j7v3w6u == \w\3\g\y\i\u\p\n\8\7\o\l\o\a\8\a\0\f\q\4\e\m\2\c\k\o\g\r\7\5\z\w\q\z\x\i\3\y\c\i\s\q\z\n\4\y\j\6\7\k\c\e\b\h\a\d\j\t\b\p\8\b\u\5\4\d\g\m\s\u\r\u\r\r\3\c\u\x\k\8\h\f\d\t\7\u\c\k\m\e\c\6\t\5\o\n\d\m\8\8\t\7\6\o\7\1\8\b\i\g\j\5\r\t\6\k\0\6\l\s\m\z\t\z\6\i\l\a\z\o\e\d\q\4\2\2\f\w\v\e\b\t\j\g\l\6\0\b\3\q\1\o\p\z\u\c\6\f\m\j\h\q\6\e\e\m\2\6\5\x\z\s\7\2\0\3\b\w\8\9\q\0\q\e\d\w\j\6\0\b\m\p\a\0\e\p\d\a\b\d\8\q\q\t\a\q\t\3\q\a\r\f\i\p\v\c\v\f\6\b\x\2\e\x\7\h\j\l\7\h\u\b\t\1\t\j\3\4\3\o\v\8\1\x\l\f\1\v\u\i\b\t\k\u\i\s\m\h\4\i\c\8\m\y\o\7\k\t\7\0\u\w\c\2\b\n\5\k\s\w\7\h\y\x\p\s\o\v\j\c\b\0\9\f\v\e\x\i\2\n\g\9\d\7\3\u\i\e\d\k\l\y\p\d\k\s\b\l\4\n\i\u\e\3\w\6\9\t\q\9\e\i\o\a\5\7\6\1\k\n\4\x\j\f\e\h\z\8\9\w\r\s\b\d\c\k\1\y\2\e\t\0\v\0\3\p\x\e\y\k\v\i\p\w\x\0\1\j\2\k\8\l\8\8\6\6\o\a\u\o\q\f\4\k\4\z\m\h\e\i\n\5\s\g\y\a\u\r\l\t\d\x\9\w\z\e\n\2\u\y\y\0\e\t\c\q\w\6\d\p\h\u\l\b\p\9\d\z\d\f\f\l\8\3\w\d\r\b\1\9\8\q\9\r\2\m\j\1\q\9\o\6\6\v\m\q\r\r\4\c\a\5\t\a\w\1\p\s\m\n\e\5\k\5\c\r\0\h\4\f\3\o\6\i\e\o\6\j\8\x\9\e\9\0\9\j\7\v\3\w\6\u ]] 00:06:37.479 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.479 19:44:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:37.736 [2024-07-15 19:44:31.734135] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
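The dd_flags_misc passes running from here cross each read flag with each write flag: --iflag in {direct, nonblock} against --oflag in {direct, nonblock, sync, dsync}, with a fresh 512-byte payload generated per read flag and the copy checked against the source every time. The loop is essentially the following sketch (/dev/urandom and cmp standing in for the suite's gen_bytes and string-compare helpers):

  for iflag in direct nonblock; do
    head -c 512 /dev/urandom > dd.dump0          # stand-in for gen_bytes 512
    for oflag in direct nonblock sync dsync; do
      spdk_dd --if=dd.dump0 --iflag=$iflag --of=dd.dump1 --oflag=$oflag
      cmp dd.dump0 dd.dump1                      # the suite compares the printable contents instead
    done
  done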
00:06:37.736 [2024-07-15 19:44:31.734243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:06:37.736 [2024-07-15 19:44:31.873510] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.994 [2024-07-15 19:44:31.992238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.994 [2024-07-15 19:44:32.049072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.252  Copying: 512/512 [B] (average 500 kBps) 00:06:38.252 00:06:38.252 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ w3gyiupn87oloa8a0fq4em2ckogr75zwqzxi3ycisqzn4yj67kcebhadjtbp8bu54dgmsururr3cuxk8hfdt7uckmec6t5ondm88t76o718bigj5rt6k06lsmztz6ilazoedq422fwvebtjgl60b3q1opzuc6fmjhq6eem265xzs7203bw89q0qedwj60bmpa0epdabd8qqtaqt3qarfipvcvf6bx2ex7hjl7hubt1tj343ov81xlf1vuibtkuismh4ic8myo7kt70uwc2bn5ksw7hyxpsovjcb09fvexi2ng9d73uiedklypdksbl4niue3w69tq9eioa5761kn4xjfehz89wrsbdck1y2et0v03pxeykvipwx01j2k8l8866oauoqf4k4zmhein5sgyaurltdx9wzen2uyy0etcqw6dphulbp9dzdffl83wdrb198q9r2mj1q9o66vmqrr4ca5taw1psmne5k5cr0h4f3o6ieo6j8x9e909j7v3w6u == \w\3\g\y\i\u\p\n\8\7\o\l\o\a\8\a\0\f\q\4\e\m\2\c\k\o\g\r\7\5\z\w\q\z\x\i\3\y\c\i\s\q\z\n\4\y\j\6\7\k\c\e\b\h\a\d\j\t\b\p\8\b\u\5\4\d\g\m\s\u\r\u\r\r\3\c\u\x\k\8\h\f\d\t\7\u\c\k\m\e\c\6\t\5\o\n\d\m\8\8\t\7\6\o\7\1\8\b\i\g\j\5\r\t\6\k\0\6\l\s\m\z\t\z\6\i\l\a\z\o\e\d\q\4\2\2\f\w\v\e\b\t\j\g\l\6\0\b\3\q\1\o\p\z\u\c\6\f\m\j\h\q\6\e\e\m\2\6\5\x\z\s\7\2\0\3\b\w\8\9\q\0\q\e\d\w\j\6\0\b\m\p\a\0\e\p\d\a\b\d\8\q\q\t\a\q\t\3\q\a\r\f\i\p\v\c\v\f\6\b\x\2\e\x\7\h\j\l\7\h\u\b\t\1\t\j\3\4\3\o\v\8\1\x\l\f\1\v\u\i\b\t\k\u\i\s\m\h\4\i\c\8\m\y\o\7\k\t\7\0\u\w\c\2\b\n\5\k\s\w\7\h\y\x\p\s\o\v\j\c\b\0\9\f\v\e\x\i\2\n\g\9\d\7\3\u\i\e\d\k\l\y\p\d\k\s\b\l\4\n\i\u\e\3\w\6\9\t\q\9\e\i\o\a\5\7\6\1\k\n\4\x\j\f\e\h\z\8\9\w\r\s\b\d\c\k\1\y\2\e\t\0\v\0\3\p\x\e\y\k\v\i\p\w\x\0\1\j\2\k\8\l\8\8\6\6\o\a\u\o\q\f\4\k\4\z\m\h\e\i\n\5\s\g\y\a\u\r\l\t\d\x\9\w\z\e\n\2\u\y\y\0\e\t\c\q\w\6\d\p\h\u\l\b\p\9\d\z\d\f\f\l\8\3\w\d\r\b\1\9\8\q\9\r\2\m\j\1\q\9\o\6\6\v\m\q\r\r\4\c\a\5\t\a\w\1\p\s\m\n\e\5\k\5\c\r\0\h\4\f\3\o\6\i\e\o\6\j\8\x\9\e\9\0\9\j\7\v\3\w\6\u ]] 00:06:38.252 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.252 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:38.252 [2024-07-15 19:44:32.344563] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:38.252 [2024-07-15 19:44:32.344671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63283 ] 00:06:38.252 [2024-07-15 19:44:32.474680] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.512 [2024-07-15 19:44:32.585749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.512 [2024-07-15 19:44:32.639203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.770  Copying: 512/512 [B] (average 125 kBps) 00:06:38.770 00:06:38.770 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ w3gyiupn87oloa8a0fq4em2ckogr75zwqzxi3ycisqzn4yj67kcebhadjtbp8bu54dgmsururr3cuxk8hfdt7uckmec6t5ondm88t76o718bigj5rt6k06lsmztz6ilazoedq422fwvebtjgl60b3q1opzuc6fmjhq6eem265xzs7203bw89q0qedwj60bmpa0epdabd8qqtaqt3qarfipvcvf6bx2ex7hjl7hubt1tj343ov81xlf1vuibtkuismh4ic8myo7kt70uwc2bn5ksw7hyxpsovjcb09fvexi2ng9d73uiedklypdksbl4niue3w69tq9eioa5761kn4xjfehz89wrsbdck1y2et0v03pxeykvipwx01j2k8l8866oauoqf4k4zmhein5sgyaurltdx9wzen2uyy0etcqw6dphulbp9dzdffl83wdrb198q9r2mj1q9o66vmqrr4ca5taw1psmne5k5cr0h4f3o6ieo6j8x9e909j7v3w6u == \w\3\g\y\i\u\p\n\8\7\o\l\o\a\8\a\0\f\q\4\e\m\2\c\k\o\g\r\7\5\z\w\q\z\x\i\3\y\c\i\s\q\z\n\4\y\j\6\7\k\c\e\b\h\a\d\j\t\b\p\8\b\u\5\4\d\g\m\s\u\r\u\r\r\3\c\u\x\k\8\h\f\d\t\7\u\c\k\m\e\c\6\t\5\o\n\d\m\8\8\t\7\6\o\7\1\8\b\i\g\j\5\r\t\6\k\0\6\l\s\m\z\t\z\6\i\l\a\z\o\e\d\q\4\2\2\f\w\v\e\b\t\j\g\l\6\0\b\3\q\1\o\p\z\u\c\6\f\m\j\h\q\6\e\e\m\2\6\5\x\z\s\7\2\0\3\b\w\8\9\q\0\q\e\d\w\j\6\0\b\m\p\a\0\e\p\d\a\b\d\8\q\q\t\a\q\t\3\q\a\r\f\i\p\v\c\v\f\6\b\x\2\e\x\7\h\j\l\7\h\u\b\t\1\t\j\3\4\3\o\v\8\1\x\l\f\1\v\u\i\b\t\k\u\i\s\m\h\4\i\c\8\m\y\o\7\k\t\7\0\u\w\c\2\b\n\5\k\s\w\7\h\y\x\p\s\o\v\j\c\b\0\9\f\v\e\x\i\2\n\g\9\d\7\3\u\i\e\d\k\l\y\p\d\k\s\b\l\4\n\i\u\e\3\w\6\9\t\q\9\e\i\o\a\5\7\6\1\k\n\4\x\j\f\e\h\z\8\9\w\r\s\b\d\c\k\1\y\2\e\t\0\v\0\3\p\x\e\y\k\v\i\p\w\x\0\1\j\2\k\8\l\8\8\6\6\o\a\u\o\q\f\4\k\4\z\m\h\e\i\n\5\s\g\y\a\u\r\l\t\d\x\9\w\z\e\n\2\u\y\y\0\e\t\c\q\w\6\d\p\h\u\l\b\p\9\d\z\d\f\f\l\8\3\w\d\r\b\1\9\8\q\9\r\2\m\j\1\q\9\o\6\6\v\m\q\r\r\4\c\a\5\t\a\w\1\p\s\m\n\e\5\k\5\c\r\0\h\4\f\3\o\6\i\e\o\6\j\8\x\9\e\9\0\9\j\7\v\3\w\6\u ]] 00:06:38.770 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.770 19:44:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:38.770 [2024-07-15 19:44:32.927923] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:38.770 [2024-07-15 19:44:32.928029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:06:39.029 [2024-07-15 19:44:33.060733] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.029 [2024-07-15 19:44:33.166863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.029 [2024-07-15 19:44:33.220236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.287  Copying: 512/512 [B] (average 500 kBps) 00:06:39.287 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ w3gyiupn87oloa8a0fq4em2ckogr75zwqzxi3ycisqzn4yj67kcebhadjtbp8bu54dgmsururr3cuxk8hfdt7uckmec6t5ondm88t76o718bigj5rt6k06lsmztz6ilazoedq422fwvebtjgl60b3q1opzuc6fmjhq6eem265xzs7203bw89q0qedwj60bmpa0epdabd8qqtaqt3qarfipvcvf6bx2ex7hjl7hubt1tj343ov81xlf1vuibtkuismh4ic8myo7kt70uwc2bn5ksw7hyxpsovjcb09fvexi2ng9d73uiedklypdksbl4niue3w69tq9eioa5761kn4xjfehz89wrsbdck1y2et0v03pxeykvipwx01j2k8l8866oauoqf4k4zmhein5sgyaurltdx9wzen2uyy0etcqw6dphulbp9dzdffl83wdrb198q9r2mj1q9o66vmqrr4ca5taw1psmne5k5cr0h4f3o6ieo6j8x9e909j7v3w6u == \w\3\g\y\i\u\p\n\8\7\o\l\o\a\8\a\0\f\q\4\e\m\2\c\k\o\g\r\7\5\z\w\q\z\x\i\3\y\c\i\s\q\z\n\4\y\j\6\7\k\c\e\b\h\a\d\j\t\b\p\8\b\u\5\4\d\g\m\s\u\r\u\r\r\3\c\u\x\k\8\h\f\d\t\7\u\c\k\m\e\c\6\t\5\o\n\d\m\8\8\t\7\6\o\7\1\8\b\i\g\j\5\r\t\6\k\0\6\l\s\m\z\t\z\6\i\l\a\z\o\e\d\q\4\2\2\f\w\v\e\b\t\j\g\l\6\0\b\3\q\1\o\p\z\u\c\6\f\m\j\h\q\6\e\e\m\2\6\5\x\z\s\7\2\0\3\b\w\8\9\q\0\q\e\d\w\j\6\0\b\m\p\a\0\e\p\d\a\b\d\8\q\q\t\a\q\t\3\q\a\r\f\i\p\v\c\v\f\6\b\x\2\e\x\7\h\j\l\7\h\u\b\t\1\t\j\3\4\3\o\v\8\1\x\l\f\1\v\u\i\b\t\k\u\i\s\m\h\4\i\c\8\m\y\o\7\k\t\7\0\u\w\c\2\b\n\5\k\s\w\7\h\y\x\p\s\o\v\j\c\b\0\9\f\v\e\x\i\2\n\g\9\d\7\3\u\i\e\d\k\l\y\p\d\k\s\b\l\4\n\i\u\e\3\w\6\9\t\q\9\e\i\o\a\5\7\6\1\k\n\4\x\j\f\e\h\z\8\9\w\r\s\b\d\c\k\1\y\2\e\t\0\v\0\3\p\x\e\y\k\v\i\p\w\x\0\1\j\2\k\8\l\8\8\6\6\o\a\u\o\q\f\4\k\4\z\m\h\e\i\n\5\s\g\y\a\u\r\l\t\d\x\9\w\z\e\n\2\u\y\y\0\e\t\c\q\w\6\d\p\h\u\l\b\p\9\d\z\d\f\f\l\8\3\w\d\r\b\1\9\8\q\9\r\2\m\j\1\q\9\o\6\6\v\m\q\r\r\4\c\a\5\t\a\w\1\p\s\m\n\e\5\k\5\c\r\0\h\4\f\3\o\6\i\e\o\6\j\8\x\9\e\9\0\9\j\7\v\3\w\6\u ]] 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.287 19:44:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:39.287 [2024-07-15 19:44:33.523854] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:39.287 [2024-07-15 19:44:33.523966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63303 ] 00:06:39.545 [2024-07-15 19:44:33.654730] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.545 [2024-07-15 19:44:33.760879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.851 [2024-07-15 19:44:33.816472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.155  Copying: 512/512 [B] (average 500 kBps) 00:06:40.155 00:06:40.155 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qgogisw81b909ejo1r5brhe8e4mxe5bpld8gdzveox8xssnegz48z0vtw6b4be2rcpiouw8toatetvyod36vou5qw7m1kogltsl0tb4vqce597k1y6cdpcdvc0j8kwowkcneuvpyfybv79r89k7u5etphpsmpyqkox7t8t524rbyb69ii3azxfyoxs6k8bc4j0yrtt2nchvwbwkpxfd4m0poxgr0o5j7ccyciqvx4l9bcul6lo5zr19cpnpbbhcm2rxlku5xrks6k8thn5w501p9ed2qj4g791mp73xubmxxtro0uxexdxbtd7g7covndc8bz5vjyi1xct8zvn8mg4pe6sh6mh9co9zdxeh0uzt6jl0avjbrbetef2f5cwmjjbzx1y6sufugb63iwhwqpy6fw0m43vmolwgys4l0s9rzzijbxd6bqvbjv8lrs743q6mwl42zpupgmvjskiv3x3ys09k8rww6u1h97xgl2sunadnj4f915wve79n6dqig == \q\g\o\g\i\s\w\8\1\b\9\0\9\e\j\o\1\r\5\b\r\h\e\8\e\4\m\x\e\5\b\p\l\d\8\g\d\z\v\e\o\x\8\x\s\s\n\e\g\z\4\8\z\0\v\t\w\6\b\4\b\e\2\r\c\p\i\o\u\w\8\t\o\a\t\e\t\v\y\o\d\3\6\v\o\u\5\q\w\7\m\1\k\o\g\l\t\s\l\0\t\b\4\v\q\c\e\5\9\7\k\1\y\6\c\d\p\c\d\v\c\0\j\8\k\w\o\w\k\c\n\e\u\v\p\y\f\y\b\v\7\9\r\8\9\k\7\u\5\e\t\p\h\p\s\m\p\y\q\k\o\x\7\t\8\t\5\2\4\r\b\y\b\6\9\i\i\3\a\z\x\f\y\o\x\s\6\k\8\b\c\4\j\0\y\r\t\t\2\n\c\h\v\w\b\w\k\p\x\f\d\4\m\0\p\o\x\g\r\0\o\5\j\7\c\c\y\c\i\q\v\x\4\l\9\b\c\u\l\6\l\o\5\z\r\1\9\c\p\n\p\b\b\h\c\m\2\r\x\l\k\u\5\x\r\k\s\6\k\8\t\h\n\5\w\5\0\1\p\9\e\d\2\q\j\4\g\7\9\1\m\p\7\3\x\u\b\m\x\x\t\r\o\0\u\x\e\x\d\x\b\t\d\7\g\7\c\o\v\n\d\c\8\b\z\5\v\j\y\i\1\x\c\t\8\z\v\n\8\m\g\4\p\e\6\s\h\6\m\h\9\c\o\9\z\d\x\e\h\0\u\z\t\6\j\l\0\a\v\j\b\r\b\e\t\e\f\2\f\5\c\w\m\j\j\b\z\x\1\y\6\s\u\f\u\g\b\6\3\i\w\h\w\q\p\y\6\f\w\0\m\4\3\v\m\o\l\w\g\y\s\4\l\0\s\9\r\z\z\i\j\b\x\d\6\b\q\v\b\j\v\8\l\r\s\7\4\3\q\6\m\w\l\4\2\z\p\u\p\g\m\v\j\s\k\i\v\3\x\3\y\s\0\9\k\8\r\w\w\6\u\1\h\9\7\x\g\l\2\s\u\n\a\d\n\j\4\f\9\1\5\w\v\e\7\9\n\6\d\q\i\g ]] 00:06:40.155 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.155 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:40.155 [2024-07-15 19:44:34.128671] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:40.156 [2024-07-15 19:44:34.128780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63313 ] 00:06:40.156 [2024-07-15 19:44:34.267059] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.156 [2024-07-15 19:44:34.389913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.414 [2024-07-15 19:44:34.449433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.673  Copying: 512/512 [B] (average 500 kBps) 00:06:40.673 00:06:40.673 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qgogisw81b909ejo1r5brhe8e4mxe5bpld8gdzveox8xssnegz48z0vtw6b4be2rcpiouw8toatetvyod36vou5qw7m1kogltsl0tb4vqce597k1y6cdpcdvc0j8kwowkcneuvpyfybv79r89k7u5etphpsmpyqkox7t8t524rbyb69ii3azxfyoxs6k8bc4j0yrtt2nchvwbwkpxfd4m0poxgr0o5j7ccyciqvx4l9bcul6lo5zr19cpnpbbhcm2rxlku5xrks6k8thn5w501p9ed2qj4g791mp73xubmxxtro0uxexdxbtd7g7covndc8bz5vjyi1xct8zvn8mg4pe6sh6mh9co9zdxeh0uzt6jl0avjbrbetef2f5cwmjjbzx1y6sufugb63iwhwqpy6fw0m43vmolwgys4l0s9rzzijbxd6bqvbjv8lrs743q6mwl42zpupgmvjskiv3x3ys09k8rww6u1h97xgl2sunadnj4f915wve79n6dqig == \q\g\o\g\i\s\w\8\1\b\9\0\9\e\j\o\1\r\5\b\r\h\e\8\e\4\m\x\e\5\b\p\l\d\8\g\d\z\v\e\o\x\8\x\s\s\n\e\g\z\4\8\z\0\v\t\w\6\b\4\b\e\2\r\c\p\i\o\u\w\8\t\o\a\t\e\t\v\y\o\d\3\6\v\o\u\5\q\w\7\m\1\k\o\g\l\t\s\l\0\t\b\4\v\q\c\e\5\9\7\k\1\y\6\c\d\p\c\d\v\c\0\j\8\k\w\o\w\k\c\n\e\u\v\p\y\f\y\b\v\7\9\r\8\9\k\7\u\5\e\t\p\h\p\s\m\p\y\q\k\o\x\7\t\8\t\5\2\4\r\b\y\b\6\9\i\i\3\a\z\x\f\y\o\x\s\6\k\8\b\c\4\j\0\y\r\t\t\2\n\c\h\v\w\b\w\k\p\x\f\d\4\m\0\p\o\x\g\r\0\o\5\j\7\c\c\y\c\i\q\v\x\4\l\9\b\c\u\l\6\l\o\5\z\r\1\9\c\p\n\p\b\b\h\c\m\2\r\x\l\k\u\5\x\r\k\s\6\k\8\t\h\n\5\w\5\0\1\p\9\e\d\2\q\j\4\g\7\9\1\m\p\7\3\x\u\b\m\x\x\t\r\o\0\u\x\e\x\d\x\b\t\d\7\g\7\c\o\v\n\d\c\8\b\z\5\v\j\y\i\1\x\c\t\8\z\v\n\8\m\g\4\p\e\6\s\h\6\m\h\9\c\o\9\z\d\x\e\h\0\u\z\t\6\j\l\0\a\v\j\b\r\b\e\t\e\f\2\f\5\c\w\m\j\j\b\z\x\1\y\6\s\u\f\u\g\b\6\3\i\w\h\w\q\p\y\6\f\w\0\m\4\3\v\m\o\l\w\g\y\s\4\l\0\s\9\r\z\z\i\j\b\x\d\6\b\q\v\b\j\v\8\l\r\s\7\4\3\q\6\m\w\l\4\2\z\p\u\p\g\m\v\j\s\k\i\v\3\x\3\y\s\0\9\k\8\r\w\w\6\u\1\h\9\7\x\g\l\2\s\u\n\a\d\n\j\4\f\9\1\5\w\v\e\7\9\n\6\d\q\i\g ]] 00:06:40.673 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.673 19:44:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.673 [2024-07-15 19:44:34.754587] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:40.673 [2024-07-15 19:44:34.754698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:06:40.673 [2024-07-15 19:44:34.892578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.931 [2024-07-15 19:44:35.004879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.931 [2024-07-15 19:44:35.059841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.190  Copying: 512/512 [B] (average 250 kBps) 00:06:41.190 00:06:41.190 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qgogisw81b909ejo1r5brhe8e4mxe5bpld8gdzveox8xssnegz48z0vtw6b4be2rcpiouw8toatetvyod36vou5qw7m1kogltsl0tb4vqce597k1y6cdpcdvc0j8kwowkcneuvpyfybv79r89k7u5etphpsmpyqkox7t8t524rbyb69ii3azxfyoxs6k8bc4j0yrtt2nchvwbwkpxfd4m0poxgr0o5j7ccyciqvx4l9bcul6lo5zr19cpnpbbhcm2rxlku5xrks6k8thn5w501p9ed2qj4g791mp73xubmxxtro0uxexdxbtd7g7covndc8bz5vjyi1xct8zvn8mg4pe6sh6mh9co9zdxeh0uzt6jl0avjbrbetef2f5cwmjjbzx1y6sufugb63iwhwqpy6fw0m43vmolwgys4l0s9rzzijbxd6bqvbjv8lrs743q6mwl42zpupgmvjskiv3x3ys09k8rww6u1h97xgl2sunadnj4f915wve79n6dqig == \q\g\o\g\i\s\w\8\1\b\9\0\9\e\j\o\1\r\5\b\r\h\e\8\e\4\m\x\e\5\b\p\l\d\8\g\d\z\v\e\o\x\8\x\s\s\n\e\g\z\4\8\z\0\v\t\w\6\b\4\b\e\2\r\c\p\i\o\u\w\8\t\o\a\t\e\t\v\y\o\d\3\6\v\o\u\5\q\w\7\m\1\k\o\g\l\t\s\l\0\t\b\4\v\q\c\e\5\9\7\k\1\y\6\c\d\p\c\d\v\c\0\j\8\k\w\o\w\k\c\n\e\u\v\p\y\f\y\b\v\7\9\r\8\9\k\7\u\5\e\t\p\h\p\s\m\p\y\q\k\o\x\7\t\8\t\5\2\4\r\b\y\b\6\9\i\i\3\a\z\x\f\y\o\x\s\6\k\8\b\c\4\j\0\y\r\t\t\2\n\c\h\v\w\b\w\k\p\x\f\d\4\m\0\p\o\x\g\r\0\o\5\j\7\c\c\y\c\i\q\v\x\4\l\9\b\c\u\l\6\l\o\5\z\r\1\9\c\p\n\p\b\b\h\c\m\2\r\x\l\k\u\5\x\r\k\s\6\k\8\t\h\n\5\w\5\0\1\p\9\e\d\2\q\j\4\g\7\9\1\m\p\7\3\x\u\b\m\x\x\t\r\o\0\u\x\e\x\d\x\b\t\d\7\g\7\c\o\v\n\d\c\8\b\z\5\v\j\y\i\1\x\c\t\8\z\v\n\8\m\g\4\p\e\6\s\h\6\m\h\9\c\o\9\z\d\x\e\h\0\u\z\t\6\j\l\0\a\v\j\b\r\b\e\t\e\f\2\f\5\c\w\m\j\j\b\z\x\1\y\6\s\u\f\u\g\b\6\3\i\w\h\w\q\p\y\6\f\w\0\m\4\3\v\m\o\l\w\g\y\s\4\l\0\s\9\r\z\z\i\j\b\x\d\6\b\q\v\b\j\v\8\l\r\s\7\4\3\q\6\m\w\l\4\2\z\p\u\p\g\m\v\j\s\k\i\v\3\x\3\y\s\0\9\k\8\r\w\w\6\u\1\h\9\7\x\g\l\2\s\u\n\a\d\n\j\4\f\9\1\5\w\v\e\7\9\n\6\d\q\i\g ]] 00:06:41.190 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.190 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:41.190 [2024-07-15 19:44:35.368118] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:41.190 [2024-07-15 19:44:35.368250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:06:41.448 [2024-07-15 19:44:35.511354] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.448 [2024-07-15 19:44:35.618253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.448 [2024-07-15 19:44:35.672782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.707  Copying: 512/512 [B] (average 166 kBps) 00:06:41.707 00:06:41.707 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qgogisw81b909ejo1r5brhe8e4mxe5bpld8gdzveox8xssnegz48z0vtw6b4be2rcpiouw8toatetvyod36vou5qw7m1kogltsl0tb4vqce597k1y6cdpcdvc0j8kwowkcneuvpyfybv79r89k7u5etphpsmpyqkox7t8t524rbyb69ii3azxfyoxs6k8bc4j0yrtt2nchvwbwkpxfd4m0poxgr0o5j7ccyciqvx4l9bcul6lo5zr19cpnpbbhcm2rxlku5xrks6k8thn5w501p9ed2qj4g791mp73xubmxxtro0uxexdxbtd7g7covndc8bz5vjyi1xct8zvn8mg4pe6sh6mh9co9zdxeh0uzt6jl0avjbrbetef2f5cwmjjbzx1y6sufugb63iwhwqpy6fw0m43vmolwgys4l0s9rzzijbxd6bqvbjv8lrs743q6mwl42zpupgmvjskiv3x3ys09k8rww6u1h97xgl2sunadnj4f915wve79n6dqig == \q\g\o\g\i\s\w\8\1\b\9\0\9\e\j\o\1\r\5\b\r\h\e\8\e\4\m\x\e\5\b\p\l\d\8\g\d\z\v\e\o\x\8\x\s\s\n\e\g\z\4\8\z\0\v\t\w\6\b\4\b\e\2\r\c\p\i\o\u\w\8\t\o\a\t\e\t\v\y\o\d\3\6\v\o\u\5\q\w\7\m\1\k\o\g\l\t\s\l\0\t\b\4\v\q\c\e\5\9\7\k\1\y\6\c\d\p\c\d\v\c\0\j\8\k\w\o\w\k\c\n\e\u\v\p\y\f\y\b\v\7\9\r\8\9\k\7\u\5\e\t\p\h\p\s\m\p\y\q\k\o\x\7\t\8\t\5\2\4\r\b\y\b\6\9\i\i\3\a\z\x\f\y\o\x\s\6\k\8\b\c\4\j\0\y\r\t\t\2\n\c\h\v\w\b\w\k\p\x\f\d\4\m\0\p\o\x\g\r\0\o\5\j\7\c\c\y\c\i\q\v\x\4\l\9\b\c\u\l\6\l\o\5\z\r\1\9\c\p\n\p\b\b\h\c\m\2\r\x\l\k\u\5\x\r\k\s\6\k\8\t\h\n\5\w\5\0\1\p\9\e\d\2\q\j\4\g\7\9\1\m\p\7\3\x\u\b\m\x\x\t\r\o\0\u\x\e\x\d\x\b\t\d\7\g\7\c\o\v\n\d\c\8\b\z\5\v\j\y\i\1\x\c\t\8\z\v\n\8\m\g\4\p\e\6\s\h\6\m\h\9\c\o\9\z\d\x\e\h\0\u\z\t\6\j\l\0\a\v\j\b\r\b\e\t\e\f\2\f\5\c\w\m\j\j\b\z\x\1\y\6\s\u\f\u\g\b\6\3\i\w\h\w\q\p\y\6\f\w\0\m\4\3\v\m\o\l\w\g\y\s\4\l\0\s\9\r\z\z\i\j\b\x\d\6\b\q\v\b\j\v\8\l\r\s\7\4\3\q\6\m\w\l\4\2\z\p\u\p\g\m\v\j\s\k\i\v\3\x\3\y\s\0\9\k\8\r\w\w\6\u\1\h\9\7\x\g\l\2\s\u\n\a\d\n\j\4\f\9\1\5\w\v\e\7\9\n\6\d\q\i\g ]] 00:06:41.707 00:06:41.707 real 0m4.838s 00:06:41.707 user 0m2.792s 00:06:41.707 sys 0m2.224s 00:06:41.707 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.707 19:44:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:41.707 ************************************ 00:06:41.707 END TEST dd_flags_misc 00:06:41.707 ************************************ 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:41.965 * Second test run, disabling liburing, forcing AIO 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:41.965 ************************************ 00:06:41.965 START TEST dd_flag_append_forced_aio 00:06:41.965 ************************************ 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=dlwqkzn1bvc2mac3s1z97zvjl6j10698 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=0u76nqhz4wyto0cap90wg4txymod90wp 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s dlwqkzn1bvc2mac3s1z97zvjl6j10698 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 0u76nqhz4wyto0cap90wg4txymod90wp 00:06:41.965 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:41.965 [2024-07-15 19:44:36.030205] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
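From dd_flag_append_forced_aio onward the same posix cases are replayed with liburing disabled: DD_APP picks up --aio, so every spdk_dd invocation below runs the kernel AIO path instead of io_uring. For the append case that means, in sketch form:

  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(cat dd.dump1) == "${dump1}${dump0}" ]]   # same concatenation check as the liburing run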
00:06:41.965 [2024-07-15 19:44:36.030309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63366 ] 00:06:41.965 [2024-07-15 19:44:36.162036] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.224 [2024-07-15 19:44:36.273128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.224 [2024-07-15 19:44:36.327989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.483  Copying: 32/32 [B] (average 31 kBps) 00:06:42.483 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 0u76nqhz4wyto0cap90wg4txymod90wpdlwqkzn1bvc2mac3s1z97zvjl6j10698 == \0\u\7\6\n\q\h\z\4\w\y\t\o\0\c\a\p\9\0\w\g\4\t\x\y\m\o\d\9\0\w\p\d\l\w\q\k\z\n\1\b\v\c\2\m\a\c\3\s\1\z\9\7\z\v\j\l\6\j\1\0\6\9\8 ]] 00:06:42.483 00:06:42.483 real 0m0.620s 00:06:42.483 user 0m0.357s 00:06:42.483 sys 0m0.144s 00:06:42.483 ************************************ 00:06:42.483 END TEST dd_flag_append_forced_aio 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.483 ************************************ 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:42.483 ************************************ 00:06:42.483 START TEST dd_flag_directory_forced_aio 00:06:42.483 ************************************ 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.483 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.483 [2024-07-15 19:44:36.701442] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:42.483 [2024-07-15 19:44:36.701554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63392 ] 00:06:42.742 [2024-07-15 19:44:36.839413] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.742 [2024-07-15 19:44:36.948048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.999 [2024-07-15 19:44:37.003486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.999 [2024-07-15 19:44:37.039440] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:42.999 [2024-07-15 19:44:37.039523] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:42.999 [2024-07-15 19:44:37.039537] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.999 [2024-07-15 19:44:37.159868] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.257 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.258 [2024-07-15 19:44:37.318020] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:43.258 [2024-07-15 19:44:37.318140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63402 ] 00:06:43.258 [2024-07-15 19:44:37.457824] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.515 [2024-07-15 19:44:37.567376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.515 [2024-07-15 19:44:37.622969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.515 [2024-07-15 19:44:37.654188] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.515 [2024-07-15 19:44:37.654237] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.515 [2024-07-15 19:44:37.654251] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.774 [2024-07-15 19:44:37.766803] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:43.774 
19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.774 00:06:43.774 real 0m1.216s 00:06:43.774 user 0m0.721s 00:06:43.774 sys 0m0.284s 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:43.774 ************************************ 00:06:43.774 END TEST dd_flag_directory_forced_aio 00:06:43.774 ************************************ 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.774 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.775 ************************************ 00:06:43.775 START TEST dd_flag_nofollow_forced_aio 00:06:43.775 ************************************ 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.775 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.775 [2024-07-15 19:44:37.971749] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:43.775 [2024-07-15 19:44:37.971841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63436 ] 00:06:44.033 [2024-07-15 19:44:38.112778] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.033 [2024-07-15 19:44:38.226738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.291 [2024-07-15 19:44:38.283676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.291 [2024-07-15 19:44:38.318459] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:44.291 [2024-07-15 19:44:38.318527] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:44.291 [2024-07-15 19:44:38.318559] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.291 [2024-07-15 19:44:38.433063] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.291 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:44.572 [2024-07-15 19:44:38.586020] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:44.572 [2024-07-15 19:44:38.586145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63440 ] 00:06:44.572 [2024-07-15 19:44:38.725423] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.830 [2024-07-15 19:44:38.843053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.830 [2024-07-15 19:44:38.899019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.830 [2024-07-15 19:44:38.935205] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:44.830 [2024-07-15 19:44:38.935260] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:44.830 [2024-07-15 19:44:38.935311] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.830 [2024-07-15 19:44:39.050754] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.089 [2024-07-15 19:44:39.199369] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:45.089 [2024-07-15 19:44:39.199462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:06:45.089 [2024-07-15 19:44:39.330868] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.348 [2024-07-15 19:44:39.447012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.348 [2024-07-15 19:44:39.500912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.606  Copying: 512/512 [B] (average 500 kBps) 00:06:45.606 00:06:45.606 ************************************ 00:06:45.606 END TEST dd_flag_nofollow_forced_aio 00:06:45.606 ************************************ 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ d8bmzzw6frjgxa70yeidyoy9g8el63q0g8sf28rgxvxuir9hq7r7clochf434nxxhyv30tzinhojx3jf3pfxc6yxp5fje53x355myn27dgggycx5vmosvkr3twxdhebcm6bzamhdfxm0fnx9r1ft1dh6uys90iont33dpor88suu1wxeohl7xlguelq9bk781i92qqddr32qopbnts2mfabp3st5hznp5823xp3eodrnvnppjctdi9rt10qwfm05x6t9rwmgaapw8wa0olt3x2sn8mlnws7y1mh28457ro5f8daq7pyocz4tmx8o1ydauzqykobzxifixpyfroifhl7dr5xn41sc8n2dhxqx6345ax0mjbfvvcv22gu07aarepg65x3dvh3kf8csqvj4t7arzc19ii20sfc693qik2saebxf4jfjjk5dxf0tlu813bplidtyqhnfpr5e9k0ud49mn9xyz9x748f2vey99h2qug800t84vochdohdd2o9 == \d\8\b\m\z\z\w\6\f\r\j\g\x\a\7\0\y\e\i\d\y\o\y\9\g\8\e\l\6\3\q\0\g\8\s\f\2\8\r\g\x\v\x\u\i\r\9\h\q\7\r\7\c\l\o\c\h\f\4\3\4\n\x\x\h\y\v\3\0\t\z\i\n\h\o\j\x\3\j\f\3\p\f\x\c\6\y\x\p\5\f\j\e\5\3\x\3\5\5\m\y\n\2\7\d\g\g\g\y\c\x\5\v\m\o\s\v\k\r\3\t\w\x\d\h\e\b\c\m\6\b\z\a\m\h\d\f\x\m\0\f\n\x\9\r\1\f\t\1\d\h\6\u\y\s\9\0\i\o\n\t\3\3\d\p\o\r\8\8\s\u\u\1\w\x\e\o\h\l\7\x\l\g\u\e\l\q\9\b\k\7\8\1\i\9\2\q\q\d\d\r\3\2\q\o\p\b\n\t\s\2\m\f\a\b\p\3\s\t\5\h\z\n\p\5\8\2\3\x\p\3\e\o\d\r\n\v\n\p\p\j\c\t\d\i\9\r\t\1\0\q\w\f\m\0\5\x\6\t\9\r\w\m\g\a\a\p\w\8\w\a\0\o\l\t\3\x\2\s\n\8\m\l\n\w\s\7\y\1\m\h\2\8\4\5\7\r\o\5\f\8\d\a\q\7\p\y\o\c\z\4\t\m\x\8\o\1\y\d\a\u\z\q\y\k\o\b\z\x\i\f\i\x\p\y\f\r\o\i\f\h\l\7\d\r\5\x\n\4\1\s\c\8\n\2\d\h\x\q\x\6\3\4\5\a\x\0\m\j\b\f\v\v\c\v\2\2\g\u\0\7\a\a\r\e\p\g\6\5\x\3\d\v\h\3\k\f\8\c\s\q\v\j\4\t\7\a\r\z\c\1\9\i\i\2\0\s\f\c\6\9\3\q\i\k\2\s\a\e\b\x\f\4\j\f\j\j\k\5\d\x\f\0\t\l\u\8\1\3\b\p\l\i\d\t\y\q\h\n\f\p\r\5\e\9\k\0\u\d\4\9\m\n\9\x\y\z\9\x\7\4\8\f\2\v\e\y\9\9\h\2\q\u\g\8\0\0\t\8\4\v\o\c\h\d\o\h\d\d\2\o\9 ]] 00:06:45.606 00:06:45.606 real 0m1.880s 00:06:45.606 user 0m1.095s 00:06:45.606 sys 0m0.448s 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.606 19:44:39 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.606 ************************************ 00:06:45.606 START TEST dd_flag_noatime_forced_aio 00:06:45.606 ************************************ 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:45.606 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.865 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721072679 00:06:45.865 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.865 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721072679 00:06:45.865 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:46.800 19:44:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.800 [2024-07-15 19:44:40.909977] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:46.800 [2024-07-15 19:44:40.910089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:06:46.800 [2024-07-15 19:44:41.043482] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.059 [2024-07-15 19:44:41.144989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.059 [2024-07-15 19:44:41.198789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.318  Copying: 512/512 [B] (average 500 kBps) 00:06:47.318 00:06:47.318 19:44:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.318 19:44:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721072679 )) 00:06:47.318 19:44:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.318 19:44:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721072679 )) 00:06:47.318 19:44:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.318 [2024-07-15 19:44:41.538702] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:47.318 [2024-07-15 19:44:41.538797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63505 ] 00:06:47.576 [2024-07-15 19:44:41.677623] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.576 [2024-07-15 19:44:41.788234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.834 [2024-07-15 19:44:41.843311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.092  Copying: 512/512 [B] (average 500 kBps) 00:06:48.092 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.092 ************************************ 00:06:48.092 END TEST dd_flag_noatime_forced_aio 00:06:48.092 ************************************ 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721072681 )) 00:06:48.092 00:06:48.092 real 0m2.283s 00:06:48.092 user 0m0.721s 00:06:48.092 sys 0m0.313s 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.092 19:44:42 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.092 ************************************ 00:06:48.092 START TEST dd_flags_misc_forced_aio 00:06:48.092 ************************************ 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.092 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.093 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:48.093 [2024-07-15 19:44:42.234429] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:48.093 [2024-07-15 19:44:42.234517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:06:48.351 [2024-07-15 19:44:42.372833] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.351 [2024-07-15 19:44:42.482572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.351 [2024-07-15 19:44:42.535597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.610  Copying: 512/512 [B] (average 500 kBps) 00:06:48.610 00:06:48.610 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 894upjwblu85be4olwbuvgpqml0197ph7q8zdktetobf6ledtouo0eirtf3hq8t5ghb9wk3yumzl8u51li36b7idq6ry9hn4cqxbj34uy3a69yasnc1du4gzlm6ad2f5gndkngu1kyfux3mst294gygpagw8lmrntcvnsr2yetoszox6g5z21q7jl2jy87u7o6q67o3fj2vnaswq0qrom3vwi81vpmr4hwdkt03luowfzfv0bxghqmbnqgy6osg3f7fdq1zqyhaq7ca4e21tzslny0sj80g8x1mdpj48unj5a1rj3gy3lvn7mqbu1apwul2naar7v6r63002cz5t0ym9qkx7ydhe6nt7il59pzl7cn7uiugnnsvglwl7rltv8tb51l8lnet66nlanhfkxo58u2liedvdtjrjcriinjvpe07grhy2g35g43ohrhtftuoqni1xqg6qrv839nx7h0kupewfe35jgyujydh74cmqwevlsecz0bebd2akyqjb == 
\8\9\4\u\p\j\w\b\l\u\8\5\b\e\4\o\l\w\b\u\v\g\p\q\m\l\0\1\9\7\p\h\7\q\8\z\d\k\t\e\t\o\b\f\6\l\e\d\t\o\u\o\0\e\i\r\t\f\3\h\q\8\t\5\g\h\b\9\w\k\3\y\u\m\z\l\8\u\5\1\l\i\3\6\b\7\i\d\q\6\r\y\9\h\n\4\c\q\x\b\j\3\4\u\y\3\a\6\9\y\a\s\n\c\1\d\u\4\g\z\l\m\6\a\d\2\f\5\g\n\d\k\n\g\u\1\k\y\f\u\x\3\m\s\t\2\9\4\g\y\g\p\a\g\w\8\l\m\r\n\t\c\v\n\s\r\2\y\e\t\o\s\z\o\x\6\g\5\z\2\1\q\7\j\l\2\j\y\8\7\u\7\o\6\q\6\7\o\3\f\j\2\v\n\a\s\w\q\0\q\r\o\m\3\v\w\i\8\1\v\p\m\r\4\h\w\d\k\t\0\3\l\u\o\w\f\z\f\v\0\b\x\g\h\q\m\b\n\q\g\y\6\o\s\g\3\f\7\f\d\q\1\z\q\y\h\a\q\7\c\a\4\e\2\1\t\z\s\l\n\y\0\s\j\8\0\g\8\x\1\m\d\p\j\4\8\u\n\j\5\a\1\r\j\3\g\y\3\l\v\n\7\m\q\b\u\1\a\p\w\u\l\2\n\a\a\r\7\v\6\r\6\3\0\0\2\c\z\5\t\0\y\m\9\q\k\x\7\y\d\h\e\6\n\t\7\i\l\5\9\p\z\l\7\c\n\7\u\i\u\g\n\n\s\v\g\l\w\l\7\r\l\t\v\8\t\b\5\1\l\8\l\n\e\t\6\6\n\l\a\n\h\f\k\x\o\5\8\u\2\l\i\e\d\v\d\t\j\r\j\c\r\i\i\n\j\v\p\e\0\7\g\r\h\y\2\g\3\5\g\4\3\o\h\r\h\t\f\t\u\o\q\n\i\1\x\q\g\6\q\r\v\8\3\9\n\x\7\h\0\k\u\p\e\w\f\e\3\5\j\g\y\u\j\y\d\h\7\4\c\m\q\w\e\v\l\s\e\c\z\0\b\e\b\d\2\a\k\y\q\j\b ]] 00:06:48.610 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.610 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:48.870 [2024-07-15 19:44:42.866470] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:48.870 [2024-07-15 19:44:42.867228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63539 ] 00:06:48.870 [2024-07-15 19:44:43.023345] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.128 [2024-07-15 19:44:43.129287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.128 [2024-07-15 19:44:43.183752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.387  Copying: 512/512 [B] (average 500 kBps) 00:06:49.387 00:06:49.387 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 894upjwblu85be4olwbuvgpqml0197ph7q8zdktetobf6ledtouo0eirtf3hq8t5ghb9wk3yumzl8u51li36b7idq6ry9hn4cqxbj34uy3a69yasnc1du4gzlm6ad2f5gndkngu1kyfux3mst294gygpagw8lmrntcvnsr2yetoszox6g5z21q7jl2jy87u7o6q67o3fj2vnaswq0qrom3vwi81vpmr4hwdkt03luowfzfv0bxghqmbnqgy6osg3f7fdq1zqyhaq7ca4e21tzslny0sj80g8x1mdpj48unj5a1rj3gy3lvn7mqbu1apwul2naar7v6r63002cz5t0ym9qkx7ydhe6nt7il59pzl7cn7uiugnnsvglwl7rltv8tb51l8lnet66nlanhfkxo58u2liedvdtjrjcriinjvpe07grhy2g35g43ohrhtftuoqni1xqg6qrv839nx7h0kupewfe35jgyujydh74cmqwevlsecz0bebd2akyqjb == 
\8\9\4\u\p\j\w\b\l\u\8\5\b\e\4\o\l\w\b\u\v\g\p\q\m\l\0\1\9\7\p\h\7\q\8\z\d\k\t\e\t\o\b\f\6\l\e\d\t\o\u\o\0\e\i\r\t\f\3\h\q\8\t\5\g\h\b\9\w\k\3\y\u\m\z\l\8\u\5\1\l\i\3\6\b\7\i\d\q\6\r\y\9\h\n\4\c\q\x\b\j\3\4\u\y\3\a\6\9\y\a\s\n\c\1\d\u\4\g\z\l\m\6\a\d\2\f\5\g\n\d\k\n\g\u\1\k\y\f\u\x\3\m\s\t\2\9\4\g\y\g\p\a\g\w\8\l\m\r\n\t\c\v\n\s\r\2\y\e\t\o\s\z\o\x\6\g\5\z\2\1\q\7\j\l\2\j\y\8\7\u\7\o\6\q\6\7\o\3\f\j\2\v\n\a\s\w\q\0\q\r\o\m\3\v\w\i\8\1\v\p\m\r\4\h\w\d\k\t\0\3\l\u\o\w\f\z\f\v\0\b\x\g\h\q\m\b\n\q\g\y\6\o\s\g\3\f\7\f\d\q\1\z\q\y\h\a\q\7\c\a\4\e\2\1\t\z\s\l\n\y\0\s\j\8\0\g\8\x\1\m\d\p\j\4\8\u\n\j\5\a\1\r\j\3\g\y\3\l\v\n\7\m\q\b\u\1\a\p\w\u\l\2\n\a\a\r\7\v\6\r\6\3\0\0\2\c\z\5\t\0\y\m\9\q\k\x\7\y\d\h\e\6\n\t\7\i\l\5\9\p\z\l\7\c\n\7\u\i\u\g\n\n\s\v\g\l\w\l\7\r\l\t\v\8\t\b\5\1\l\8\l\n\e\t\6\6\n\l\a\n\h\f\k\x\o\5\8\u\2\l\i\e\d\v\d\t\j\r\j\c\r\i\i\n\j\v\p\e\0\7\g\r\h\y\2\g\3\5\g\4\3\o\h\r\h\t\f\t\u\o\q\n\i\1\x\q\g\6\q\r\v\8\3\9\n\x\7\h\0\k\u\p\e\w\f\e\3\5\j\g\y\u\j\y\d\h\7\4\c\m\q\w\e\v\l\s\e\c\z\0\b\e\b\d\2\a\k\y\q\j\b ]] 00:06:49.387 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.387 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:49.387 [2024-07-15 19:44:43.543201] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:49.387 [2024-07-15 19:44:43.543317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63552 ] 00:06:49.645 [2024-07-15 19:44:43.685752] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.645 [2024-07-15 19:44:43.799654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.645 [2024-07-15 19:44:43.872144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.162  Copying: 512/512 [B] (average 166 kBps) 00:06:50.162 00:06:50.162 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 894upjwblu85be4olwbuvgpqml0197ph7q8zdktetobf6ledtouo0eirtf3hq8t5ghb9wk3yumzl8u51li36b7idq6ry9hn4cqxbj34uy3a69yasnc1du4gzlm6ad2f5gndkngu1kyfux3mst294gygpagw8lmrntcvnsr2yetoszox6g5z21q7jl2jy87u7o6q67o3fj2vnaswq0qrom3vwi81vpmr4hwdkt03luowfzfv0bxghqmbnqgy6osg3f7fdq1zqyhaq7ca4e21tzslny0sj80g8x1mdpj48unj5a1rj3gy3lvn7mqbu1apwul2naar7v6r63002cz5t0ym9qkx7ydhe6nt7il59pzl7cn7uiugnnsvglwl7rltv8tb51l8lnet66nlanhfkxo58u2liedvdtjrjcriinjvpe07grhy2g35g43ohrhtftuoqni1xqg6qrv839nx7h0kupewfe35jgyujydh74cmqwevlsecz0bebd2akyqjb == 
\8\9\4\u\p\j\w\b\l\u\8\5\b\e\4\o\l\w\b\u\v\g\p\q\m\l\0\1\9\7\p\h\7\q\8\z\d\k\t\e\t\o\b\f\6\l\e\d\t\o\u\o\0\e\i\r\t\f\3\h\q\8\t\5\g\h\b\9\w\k\3\y\u\m\z\l\8\u\5\1\l\i\3\6\b\7\i\d\q\6\r\y\9\h\n\4\c\q\x\b\j\3\4\u\y\3\a\6\9\y\a\s\n\c\1\d\u\4\g\z\l\m\6\a\d\2\f\5\g\n\d\k\n\g\u\1\k\y\f\u\x\3\m\s\t\2\9\4\g\y\g\p\a\g\w\8\l\m\r\n\t\c\v\n\s\r\2\y\e\t\o\s\z\o\x\6\g\5\z\2\1\q\7\j\l\2\j\y\8\7\u\7\o\6\q\6\7\o\3\f\j\2\v\n\a\s\w\q\0\q\r\o\m\3\v\w\i\8\1\v\p\m\r\4\h\w\d\k\t\0\3\l\u\o\w\f\z\f\v\0\b\x\g\h\q\m\b\n\q\g\y\6\o\s\g\3\f\7\f\d\q\1\z\q\y\h\a\q\7\c\a\4\e\2\1\t\z\s\l\n\y\0\s\j\8\0\g\8\x\1\m\d\p\j\4\8\u\n\j\5\a\1\r\j\3\g\y\3\l\v\n\7\m\q\b\u\1\a\p\w\u\l\2\n\a\a\r\7\v\6\r\6\3\0\0\2\c\z\5\t\0\y\m\9\q\k\x\7\y\d\h\e\6\n\t\7\i\l\5\9\p\z\l\7\c\n\7\u\i\u\g\n\n\s\v\g\l\w\l\7\r\l\t\v\8\t\b\5\1\l\8\l\n\e\t\6\6\n\l\a\n\h\f\k\x\o\5\8\u\2\l\i\e\d\v\d\t\j\r\j\c\r\i\i\n\j\v\p\e\0\7\g\r\h\y\2\g\3\5\g\4\3\o\h\r\h\t\f\t\u\o\q\n\i\1\x\q\g\6\q\r\v\8\3\9\n\x\7\h\0\k\u\p\e\w\f\e\3\5\j\g\y\u\j\y\d\h\7\4\c\m\q\w\e\v\l\s\e\c\z\0\b\e\b\d\2\a\k\y\q\j\b ]] 00:06:50.162 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.162 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:50.162 [2024-07-15 19:44:44.277768] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:50.162 [2024-07-15 19:44:44.277922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:06:50.420 [2024-07-15 19:44:44.424225] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.420 [2024-07-15 19:44:44.552101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.420 [2024-07-15 19:44:44.608151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.679  Copying: 512/512 [B] (average 500 kBps) 00:06:50.679 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 894upjwblu85be4olwbuvgpqml0197ph7q8zdktetobf6ledtouo0eirtf3hq8t5ghb9wk3yumzl8u51li36b7idq6ry9hn4cqxbj34uy3a69yasnc1du4gzlm6ad2f5gndkngu1kyfux3mst294gygpagw8lmrntcvnsr2yetoszox6g5z21q7jl2jy87u7o6q67o3fj2vnaswq0qrom3vwi81vpmr4hwdkt03luowfzfv0bxghqmbnqgy6osg3f7fdq1zqyhaq7ca4e21tzslny0sj80g8x1mdpj48unj5a1rj3gy3lvn7mqbu1apwul2naar7v6r63002cz5t0ym9qkx7ydhe6nt7il59pzl7cn7uiugnnsvglwl7rltv8tb51l8lnet66nlanhfkxo58u2liedvdtjrjcriinjvpe07grhy2g35g43ohrhtftuoqni1xqg6qrv839nx7h0kupewfe35jgyujydh74cmqwevlsecz0bebd2akyqjb == 
\8\9\4\u\p\j\w\b\l\u\8\5\b\e\4\o\l\w\b\u\v\g\p\q\m\l\0\1\9\7\p\h\7\q\8\z\d\k\t\e\t\o\b\f\6\l\e\d\t\o\u\o\0\e\i\r\t\f\3\h\q\8\t\5\g\h\b\9\w\k\3\y\u\m\z\l\8\u\5\1\l\i\3\6\b\7\i\d\q\6\r\y\9\h\n\4\c\q\x\b\j\3\4\u\y\3\a\6\9\y\a\s\n\c\1\d\u\4\g\z\l\m\6\a\d\2\f\5\g\n\d\k\n\g\u\1\k\y\f\u\x\3\m\s\t\2\9\4\g\y\g\p\a\g\w\8\l\m\r\n\t\c\v\n\s\r\2\y\e\t\o\s\z\o\x\6\g\5\z\2\1\q\7\j\l\2\j\y\8\7\u\7\o\6\q\6\7\o\3\f\j\2\v\n\a\s\w\q\0\q\r\o\m\3\v\w\i\8\1\v\p\m\r\4\h\w\d\k\t\0\3\l\u\o\w\f\z\f\v\0\b\x\g\h\q\m\b\n\q\g\y\6\o\s\g\3\f\7\f\d\q\1\z\q\y\h\a\q\7\c\a\4\e\2\1\t\z\s\l\n\y\0\s\j\8\0\g\8\x\1\m\d\p\j\4\8\u\n\j\5\a\1\r\j\3\g\y\3\l\v\n\7\m\q\b\u\1\a\p\w\u\l\2\n\a\a\r\7\v\6\r\6\3\0\0\2\c\z\5\t\0\y\m\9\q\k\x\7\y\d\h\e\6\n\t\7\i\l\5\9\p\z\l\7\c\n\7\u\i\u\g\n\n\s\v\g\l\w\l\7\r\l\t\v\8\t\b\5\1\l\8\l\n\e\t\6\6\n\l\a\n\h\f\k\x\o\5\8\u\2\l\i\e\d\v\d\t\j\r\j\c\r\i\i\n\j\v\p\e\0\7\g\r\h\y\2\g\3\5\g\4\3\o\h\r\h\t\f\t\u\o\q\n\i\1\x\q\g\6\q\r\v\8\3\9\n\x\7\h\0\k\u\p\e\w\f\e\3\5\j\g\y\u\j\y\d\h\7\4\c\m\q\w\e\v\l\s\e\c\z\0\b\e\b\d\2\a\k\y\q\j\b ]] 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.679 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:50.937 [2024-07-15 19:44:44.941160] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:50.938 [2024-07-15 19:44:44.941257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 00:06:50.938 [2024-07-15 19:44:45.074847] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.196 [2024-07-15 19:44:45.183721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.196 [2024-07-15 19:44:45.239227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.455  Copying: 512/512 [B] (average 500 kBps) 00:06:51.455 00:06:51.455 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bd08rblw7l8tkmldszohwnpc89us21e0zrzdumrj00m37hijkrrucud4fff88krcavz4d05t9ydc0buypko177qaaluivkf6kbaixv3kim0q5tchreqnjgqgwa9gfas8m4u2lw36wd9v15gem4jb31y6pjf6xmpo7qlptnt4iu6zq0kkmm9e5iwcx1optv0ueughxzkjrtmxcra8kt59jiyvibwoq178kubc2g25gktz52g3qh6t976s1cg0q03z4wwftmikqvi3fbs0pd9m2zb3saemx5cf78jp16zxdd6a7n0p86rotbwmget8wneuf9lpsbh33dm4585io338f53kpwlvycrw4yznzar9zeodj84txi9ru75u0kchntft99uocrlazylrdy9f9m4zapkyuqxfqusmiy3hr9736se4pdo7qk7ksa7zd3oqpxp4uhr5mkaout2a33bhm2xl2necxl99ykq6a28ybjjsi6b7y5lq2srd3ek8qojsu2vp == \b\d\0\8\r\b\l\w\7\l\8\t\k\m\l\d\s\z\o\h\w\n\p\c\8\9\u\s\2\1\e\0\z\r\z\d\u\m\r\j\0\0\m\3\7\h\i\j\k\r\r\u\c\u\d\4\f\f\f\8\8\k\r\c\a\v\z\4\d\0\5\t\9\y\d\c\0\b\u\y\p\k\o\1\7\7\q\a\a\l\u\i\v\k\f\6\k\b\a\i\x\v\3\k\i\m\0\q\5\t\c\h\r\e\q\n\j\g\q\g\w\a\9\g\f\a\s\8\m\4\u\2\l\w\3\6\w\d\9\v\1\5\g\e\m\4\j\b\3\1\y\6\p\j\f\6\x\m\p\o\7\q\l\p\t\n\t\4\i\u\6\z\q\0\k\k\m\m\9\e\5\i\w\c\x\1\o\p\t\v\0\u\e\u\g\h\x\z\k\j\r\t\m\x\c\r\a\8\k\t\5\9\j\i\y\v\i\b\w\o\q\1\7\8\k\u\b\c\2\g\2\5\g\k\t\z\5\2\g\3\q\h\6\t\9\7\6\s\1\c\g\0\q\0\3\z\4\w\w\f\t\m\i\k\q\v\i\3\f\b\s\0\p\d\9\m\2\z\b\3\s\a\e\m\x\5\c\f\7\8\j\p\1\6\z\x\d\d\6\a\7\n\0\p\8\6\r\o\t\b\w\m\g\e\t\8\w\n\e\u\f\9\l\p\s\b\h\3\3\d\m\4\5\8\5\i\o\3\3\8\f\5\3\k\p\w\l\v\y\c\r\w\4\y\z\n\z\a\r\9\z\e\o\d\j\8\4\t\x\i\9\r\u\7\5\u\0\k\c\h\n\t\f\t\9\9\u\o\c\r\l\a\z\y\l\r\d\y\9\f\9\m\4\z\a\p\k\y\u\q\x\f\q\u\s\m\i\y\3\h\r\9\7\3\6\s\e\4\p\d\o\7\q\k\7\k\s\a\7\z\d\3\o\q\p\x\p\4\u\h\r\5\m\k\a\o\u\t\2\a\3\3\b\h\m\2\x\l\2\n\e\c\x\l\9\9\y\k\q\6\a\2\8\y\b\j\j\s\i\6\b\7\y\5\l\q\2\s\r\d\3\e\k\8\q\o\j\s\u\2\v\p ]] 00:06:51.455 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.455 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:51.455 [2024-07-15 19:44:45.554256] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:51.455 [2024-07-15 19:44:45.554344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63580 ] 00:06:51.455 [2024-07-15 19:44:45.691905] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.714 [2024-07-15 19:44:45.809282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.714 [2024-07-15 19:44:45.869721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.973  Copying: 512/512 [B] (average 500 kBps) 00:06:51.973 00:06:51.973 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bd08rblw7l8tkmldszohwnpc89us21e0zrzdumrj00m37hijkrrucud4fff88krcavz4d05t9ydc0buypko177qaaluivkf6kbaixv3kim0q5tchreqnjgqgwa9gfas8m4u2lw36wd9v15gem4jb31y6pjf6xmpo7qlptnt4iu6zq0kkmm9e5iwcx1optv0ueughxzkjrtmxcra8kt59jiyvibwoq178kubc2g25gktz52g3qh6t976s1cg0q03z4wwftmikqvi3fbs0pd9m2zb3saemx5cf78jp16zxdd6a7n0p86rotbwmget8wneuf9lpsbh33dm4585io338f53kpwlvycrw4yznzar9zeodj84txi9ru75u0kchntft99uocrlazylrdy9f9m4zapkyuqxfqusmiy3hr9736se4pdo7qk7ksa7zd3oqpxp4uhr5mkaout2a33bhm2xl2necxl99ykq6a28ybjjsi6b7y5lq2srd3ek8qojsu2vp == \b\d\0\8\r\b\l\w\7\l\8\t\k\m\l\d\s\z\o\h\w\n\p\c\8\9\u\s\2\1\e\0\z\r\z\d\u\m\r\j\0\0\m\3\7\h\i\j\k\r\r\u\c\u\d\4\f\f\f\8\8\k\r\c\a\v\z\4\d\0\5\t\9\y\d\c\0\b\u\y\p\k\o\1\7\7\q\a\a\l\u\i\v\k\f\6\k\b\a\i\x\v\3\k\i\m\0\q\5\t\c\h\r\e\q\n\j\g\q\g\w\a\9\g\f\a\s\8\m\4\u\2\l\w\3\6\w\d\9\v\1\5\g\e\m\4\j\b\3\1\y\6\p\j\f\6\x\m\p\o\7\q\l\p\t\n\t\4\i\u\6\z\q\0\k\k\m\m\9\e\5\i\w\c\x\1\o\p\t\v\0\u\e\u\g\h\x\z\k\j\r\t\m\x\c\r\a\8\k\t\5\9\j\i\y\v\i\b\w\o\q\1\7\8\k\u\b\c\2\g\2\5\g\k\t\z\5\2\g\3\q\h\6\t\9\7\6\s\1\c\g\0\q\0\3\z\4\w\w\f\t\m\i\k\q\v\i\3\f\b\s\0\p\d\9\m\2\z\b\3\s\a\e\m\x\5\c\f\7\8\j\p\1\6\z\x\d\d\6\a\7\n\0\p\8\6\r\o\t\b\w\m\g\e\t\8\w\n\e\u\f\9\l\p\s\b\h\3\3\d\m\4\5\8\5\i\o\3\3\8\f\5\3\k\p\w\l\v\y\c\r\w\4\y\z\n\z\a\r\9\z\e\o\d\j\8\4\t\x\i\9\r\u\7\5\u\0\k\c\h\n\t\f\t\9\9\u\o\c\r\l\a\z\y\l\r\d\y\9\f\9\m\4\z\a\p\k\y\u\q\x\f\q\u\s\m\i\y\3\h\r\9\7\3\6\s\e\4\p\d\o\7\q\k\7\k\s\a\7\z\d\3\o\q\p\x\p\4\u\h\r\5\m\k\a\o\u\t\2\a\3\3\b\h\m\2\x\l\2\n\e\c\x\l\9\9\y\k\q\6\a\2\8\y\b\j\j\s\i\6\b\7\y\5\l\q\2\s\r\d\3\e\k\8\q\o\j\s\u\2\v\p ]] 00:06:51.973 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.973 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:51.973 [2024-07-15 19:44:46.193361] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:51.973 [2024-07-15 19:44:46.193481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63593 ] 00:06:52.232 [2024-07-15 19:44:46.327321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.233 [2024-07-15 19:44:46.441914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.492 [2024-07-15 19:44:46.510619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.751  Copying: 512/512 [B] (average 125 kBps) 00:06:52.751 00:06:52.751 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bd08rblw7l8tkmldszohwnpc89us21e0zrzdumrj00m37hijkrrucud4fff88krcavz4d05t9ydc0buypko177qaaluivkf6kbaixv3kim0q5tchreqnjgqgwa9gfas8m4u2lw36wd9v15gem4jb31y6pjf6xmpo7qlptnt4iu6zq0kkmm9e5iwcx1optv0ueughxzkjrtmxcra8kt59jiyvibwoq178kubc2g25gktz52g3qh6t976s1cg0q03z4wwftmikqvi3fbs0pd9m2zb3saemx5cf78jp16zxdd6a7n0p86rotbwmget8wneuf9lpsbh33dm4585io338f53kpwlvycrw4yznzar9zeodj84txi9ru75u0kchntft99uocrlazylrdy9f9m4zapkyuqxfqusmiy3hr9736se4pdo7qk7ksa7zd3oqpxp4uhr5mkaout2a33bhm2xl2necxl99ykq6a28ybjjsi6b7y5lq2srd3ek8qojsu2vp == \b\d\0\8\r\b\l\w\7\l\8\t\k\m\l\d\s\z\o\h\w\n\p\c\8\9\u\s\2\1\e\0\z\r\z\d\u\m\r\j\0\0\m\3\7\h\i\j\k\r\r\u\c\u\d\4\f\f\f\8\8\k\r\c\a\v\z\4\d\0\5\t\9\y\d\c\0\b\u\y\p\k\o\1\7\7\q\a\a\l\u\i\v\k\f\6\k\b\a\i\x\v\3\k\i\m\0\q\5\t\c\h\r\e\q\n\j\g\q\g\w\a\9\g\f\a\s\8\m\4\u\2\l\w\3\6\w\d\9\v\1\5\g\e\m\4\j\b\3\1\y\6\p\j\f\6\x\m\p\o\7\q\l\p\t\n\t\4\i\u\6\z\q\0\k\k\m\m\9\e\5\i\w\c\x\1\o\p\t\v\0\u\e\u\g\h\x\z\k\j\r\t\m\x\c\r\a\8\k\t\5\9\j\i\y\v\i\b\w\o\q\1\7\8\k\u\b\c\2\g\2\5\g\k\t\z\5\2\g\3\q\h\6\t\9\7\6\s\1\c\g\0\q\0\3\z\4\w\w\f\t\m\i\k\q\v\i\3\f\b\s\0\p\d\9\m\2\z\b\3\s\a\e\m\x\5\c\f\7\8\j\p\1\6\z\x\d\d\6\a\7\n\0\p\8\6\r\o\t\b\w\m\g\e\t\8\w\n\e\u\f\9\l\p\s\b\h\3\3\d\m\4\5\8\5\i\o\3\3\8\f\5\3\k\p\w\l\v\y\c\r\w\4\y\z\n\z\a\r\9\z\e\o\d\j\8\4\t\x\i\9\r\u\7\5\u\0\k\c\h\n\t\f\t\9\9\u\o\c\r\l\a\z\y\l\r\d\y\9\f\9\m\4\z\a\p\k\y\u\q\x\f\q\u\s\m\i\y\3\h\r\9\7\3\6\s\e\4\p\d\o\7\q\k\7\k\s\a\7\z\d\3\o\q\p\x\p\4\u\h\r\5\m\k\a\o\u\t\2\a\3\3\b\h\m\2\x\l\2\n\e\c\x\l\9\9\y\k\q\6\a\2\8\y\b\j\j\s\i\6\b\7\y\5\l\q\2\s\r\d\3\e\k\8\q\o\j\s\u\2\v\p ]] 00:06:52.751 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.751 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:52.751 [2024-07-15 19:44:46.962982] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:52.752 [2024-07-15 19:44:46.963118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63595 ] 00:06:53.011 [2024-07-15 19:44:47.095493] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.011 [2024-07-15 19:44:47.247531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.270 [2024-07-15 19:44:47.321602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.528  Copying: 512/512 [B] (average 500 kBps) 00:06:53.528 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bd08rblw7l8tkmldszohwnpc89us21e0zrzdumrj00m37hijkrrucud4fff88krcavz4d05t9ydc0buypko177qaaluivkf6kbaixv3kim0q5tchreqnjgqgwa9gfas8m4u2lw36wd9v15gem4jb31y6pjf6xmpo7qlptnt4iu6zq0kkmm9e5iwcx1optv0ueughxzkjrtmxcra8kt59jiyvibwoq178kubc2g25gktz52g3qh6t976s1cg0q03z4wwftmikqvi3fbs0pd9m2zb3saemx5cf78jp16zxdd6a7n0p86rotbwmget8wneuf9lpsbh33dm4585io338f53kpwlvycrw4yznzar9zeodj84txi9ru75u0kchntft99uocrlazylrdy9f9m4zapkyuqxfqusmiy3hr9736se4pdo7qk7ksa7zd3oqpxp4uhr5mkaout2a33bhm2xl2necxl99ykq6a28ybjjsi6b7y5lq2srd3ek8qojsu2vp == \b\d\0\8\r\b\l\w\7\l\8\t\k\m\l\d\s\z\o\h\w\n\p\c\8\9\u\s\2\1\e\0\z\r\z\d\u\m\r\j\0\0\m\3\7\h\i\j\k\r\r\u\c\u\d\4\f\f\f\8\8\k\r\c\a\v\z\4\d\0\5\t\9\y\d\c\0\b\u\y\p\k\o\1\7\7\q\a\a\l\u\i\v\k\f\6\k\b\a\i\x\v\3\k\i\m\0\q\5\t\c\h\r\e\q\n\j\g\q\g\w\a\9\g\f\a\s\8\m\4\u\2\l\w\3\6\w\d\9\v\1\5\g\e\m\4\j\b\3\1\y\6\p\j\f\6\x\m\p\o\7\q\l\p\t\n\t\4\i\u\6\z\q\0\k\k\m\m\9\e\5\i\w\c\x\1\o\p\t\v\0\u\e\u\g\h\x\z\k\j\r\t\m\x\c\r\a\8\k\t\5\9\j\i\y\v\i\b\w\o\q\1\7\8\k\u\b\c\2\g\2\5\g\k\t\z\5\2\g\3\q\h\6\t\9\7\6\s\1\c\g\0\q\0\3\z\4\w\w\f\t\m\i\k\q\v\i\3\f\b\s\0\p\d\9\m\2\z\b\3\s\a\e\m\x\5\c\f\7\8\j\p\1\6\z\x\d\d\6\a\7\n\0\p\8\6\r\o\t\b\w\m\g\e\t\8\w\n\e\u\f\9\l\p\s\b\h\3\3\d\m\4\5\8\5\i\o\3\3\8\f\5\3\k\p\w\l\v\y\c\r\w\4\y\z\n\z\a\r\9\z\e\o\d\j\8\4\t\x\i\9\r\u\7\5\u\0\k\c\h\n\t\f\t\9\9\u\o\c\r\l\a\z\y\l\r\d\y\9\f\9\m\4\z\a\p\k\y\u\q\x\f\q\u\s\m\i\y\3\h\r\9\7\3\6\s\e\4\p\d\o\7\q\k\7\k\s\a\7\z\d\3\o\q\p\x\p\4\u\h\r\5\m\k\a\o\u\t\2\a\3\3\b\h\m\2\x\l\2\n\e\c\x\l\9\9\y\k\q\6\a\2\8\y\b\j\j\s\i\6\b\7\y\5\l\q\2\s\r\d\3\e\k\8\q\o\j\s\u\2\v\p ]] 00:06:53.529 00:06:53.529 real 0m5.531s 00:06:53.529 user 0m3.232s 00:06:53.529 sys 0m1.302s 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.529 ************************************ 00:06:53.529 END TEST dd_flags_misc_forced_aio 00:06:53.529 ************************************ 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:53.529 ************************************ 00:06:53.529 END TEST spdk_dd_posix 00:06:53.529 ************************************ 00:06:53.529 00:06:53.529 real 0m22.877s 00:06:53.529 user 0m11.937s 
00:06:53.529 sys 0m6.823s 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.529 19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.852 19:44:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:53.852 19:44:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:53.852 19:44:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.852 19:44:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.852 19:44:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:53.852 ************************************ 00:06:53.852 START TEST spdk_dd_malloc 00:06:53.852 ************************************ 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:53.852 * Looking for test storage... 00:06:53.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.852 19:44:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:53.853 ************************************ 00:06:53.853 START TEST dd_malloc_copy 00:06:53.853 ************************************ 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.853 19:44:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.853 { 00:06:53.853 "subsystems": [ 00:06:53.853 { 00:06:53.853 "subsystem": "bdev", 00:06:53.853 "config": [ 00:06:53.853 { 00:06:53.853 "params": { 00:06:53.853 "block_size": 512, 00:06:53.853 "num_blocks": 1048576, 00:06:53.853 "name": "malloc0" 00:06:53.853 }, 00:06:53.853 "method": "bdev_malloc_create" 00:06:53.853 }, 00:06:53.853 { 00:06:53.853 "params": { 00:06:53.853 "block_size": 512, 00:06:53.853 "num_blocks": 1048576, 00:06:53.853 "name": "malloc1" 00:06:53.853 }, 00:06:53.853 "method": "bdev_malloc_create" 00:06:53.853 }, 00:06:53.853 { 00:06:53.853 "method": "bdev_wait_for_examine" 00:06:53.853 } 00:06:53.853 ] 00:06:53.853 } 00:06:53.853 ] 00:06:53.853 } 00:06:53.853 [2024-07-15 19:44:47.959095] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:53.853 [2024-07-15 19:44:47.959226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63669 ] 00:06:54.112 [2024-07-15 19:44:48.100574] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.112 [2024-07-15 19:44:48.233297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.112 [2024-07-15 19:44:48.309095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.087  Copying: 213/512 [MB] (213 MBps) Copying: 403/512 [MB] (190 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:06:58.087 00:06:58.087 19:44:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:58.087 19:44:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:58.087 19:44:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:58.087 19:44:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.087 [2024-07-15 19:44:52.038385] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:58.087 [2024-07-15 19:44:52.038759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63722 ] 00:06:58.087 { 00:06:58.087 "subsystems": [ 00:06:58.087 { 00:06:58.087 "subsystem": "bdev", 00:06:58.087 "config": [ 00:06:58.087 { 00:06:58.087 "params": { 00:06:58.087 "block_size": 512, 00:06:58.087 "num_blocks": 1048576, 00:06:58.087 "name": "malloc0" 00:06:58.087 }, 00:06:58.087 "method": "bdev_malloc_create" 00:06:58.087 }, 00:06:58.087 { 00:06:58.087 "params": { 00:06:58.087 "block_size": 512, 00:06:58.087 "num_blocks": 1048576, 00:06:58.087 "name": "malloc1" 00:06:58.087 }, 00:06:58.087 "method": "bdev_malloc_create" 00:06:58.087 }, 00:06:58.087 { 00:06:58.087 "method": "bdev_wait_for_examine" 00:06:58.087 } 00:06:58.087 ] 00:06:58.087 } 00:06:58.087 ] 00:06:58.087 } 00:06:58.087 [2024-07-15 19:44:52.183043] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.087 [2024-07-15 19:44:52.285804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.346 [2024-07-15 19:44:52.360029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.786  Copying: 199/512 [MB] (199 MBps) Copying: 407/512 [MB] (207 MBps) Copying: 512/512 [MB] (average 205 MBps) 00:07:01.786 00:07:01.786 00:07:01.786 real 0m8.019s 00:07:01.786 user 0m6.711s 00:07:01.786 sys 0m1.138s 00:07:01.787 19:44:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.787 19:44:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.787 ************************************ 00:07:01.787 END TEST dd_malloc_copy 00:07:01.787 ************************************ 00:07:01.787 19:44:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:01.787 ************************************ 00:07:01.787 END TEST spdk_dd_malloc 00:07:01.787 ************************************ 00:07:01.787 00:07:01.787 real 
0m8.152s 00:07:01.787 user 0m6.769s 00:07:01.787 sys 0m1.214s 00:07:01.787 19:44:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.787 19:44:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:01.787 19:44:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:01.787 19:44:55 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:01.787 19:44:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:01.787 19:44:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.787 19:44:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:01.787 ************************************ 00:07:01.787 START TEST spdk_dd_bdev_to_bdev 00:07:01.787 ************************************ 00:07:01.787 19:44:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:02.045 * Looking for test storage... 00:07:02.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:02.045 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:02.046 
19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.046 ************************************ 00:07:02.046 START TEST dd_inflate_file 00:07:02.046 ************************************ 00:07:02.046 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:02.046 [2024-07-15 19:44:56.142516] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:02.046 [2024-07-15 19:44:56.142861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63832 ] 00:07:02.046 [2024-07-15 19:44:56.278987] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.304 [2024-07-15 19:44:56.380250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.304 [2024-07-15 19:44:56.437171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.581  Copying: 64/64 [MB] (average 1828 MBps) 00:07:02.581 00:07:02.581 00:07:02.581 real 0m0.641s 00:07:02.581 user 0m0.381s 00:07:02.581 sys 0m0.305s 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:02.581 ************************************ 00:07:02.581 END TEST dd_inflate_file 00:07:02.581 ************************************ 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.581 ************************************ 00:07:02.581 START TEST dd_copy_to_out_bdev 00:07:02.581 ************************************ 00:07:02.581 19:44:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:02.894 [2024-07-15 19:44:56.839874] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
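The test_file0_size=67108891 reported above is consistent with the two preceding steps: the magic line written by echo ("This Is Our Magic, find it" plus a newline, 27 bytes) followed by the 64 MiB of zeroes appended by dd_inflate_file. A quick arithmetic check with plain bash and coreutils (not part of the test itself):

echo $(( $(printf 'This Is Our Magic, find it\n' | wc -c) + 64 * 1024 * 1024 ))   # -> 67108891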
00:07:02.894 [2024-07-15 19:44:56.839980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63870 ] 00:07:02.894 { 00:07:02.894 "subsystems": [ 00:07:02.894 { 00:07:02.894 "subsystem": "bdev", 00:07:02.895 "config": [ 00:07:02.895 { 00:07:02.895 "params": { 00:07:02.895 "trtype": "pcie", 00:07:02.895 "traddr": "0000:00:10.0", 00:07:02.895 "name": "Nvme0" 00:07:02.895 }, 00:07:02.895 "method": "bdev_nvme_attach_controller" 00:07:02.895 }, 00:07:02.895 { 00:07:02.895 "params": { 00:07:02.895 "trtype": "pcie", 00:07:02.895 "traddr": "0000:00:11.0", 00:07:02.895 "name": "Nvme1" 00:07:02.895 }, 00:07:02.895 "method": "bdev_nvme_attach_controller" 00:07:02.895 }, 00:07:02.895 { 00:07:02.895 "method": "bdev_wait_for_examine" 00:07:02.895 } 00:07:02.895 ] 00:07:02.895 } 00:07:02.895 ] 00:07:02.895 } 00:07:02.895 [2024-07-15 19:44:56.979045] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.895 [2024-07-15 19:44:57.089554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.153 [2024-07-15 19:44:57.144743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.605  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 52 MBps) 00:07:04.605 00:07:04.605 00:07:04.605 real 0m2.016s 00:07:04.605 user 0m1.786s 00:07:04.605 sys 0m1.564s 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.605 ************************************ 00:07:04.605 END TEST dd_copy_to_out_bdev 00:07:04.605 ************************************ 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.605 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.864 ************************************ 00:07:04.864 START TEST dd_offset_magic 00:07:04.864 ************************************ 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:04.864 19:44:58 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:04.864 19:44:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:04.864 [2024-07-15 19:44:58.909183] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:04.864 [2024-07-15 19:44:58.909324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63916 ] 00:07:04.864 { 00:07:04.864 "subsystems": [ 00:07:04.864 { 00:07:04.864 "subsystem": "bdev", 00:07:04.864 "config": [ 00:07:04.864 { 00:07:04.864 "params": { 00:07:04.864 "trtype": "pcie", 00:07:04.864 "traddr": "0000:00:10.0", 00:07:04.864 "name": "Nvme0" 00:07:04.864 }, 00:07:04.864 "method": "bdev_nvme_attach_controller" 00:07:04.864 }, 00:07:04.864 { 00:07:04.864 "params": { 00:07:04.864 "trtype": "pcie", 00:07:04.864 "traddr": "0000:00:11.0", 00:07:04.864 "name": "Nvme1" 00:07:04.864 }, 00:07:04.864 "method": "bdev_nvme_attach_controller" 00:07:04.864 }, 00:07:04.864 { 00:07:04.864 "method": "bdev_wait_for_examine" 00:07:04.864 } 00:07:04.864 ] 00:07:04.864 } 00:07:04.864 ] 00:07:04.864 } 00:07:04.864 [2024-07-15 19:44:59.045810] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.123 [2024-07-15 19:44:59.158448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.123 [2024-07-15 19:44:59.212698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.639  Copying: 65/65 [MB] (average 833 MBps) 00:07:05.639 00:07:05.639 19:44:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:05.639 19:44:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:05.639 19:44:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:05.639 19:44:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:05.639 [2024-07-15 19:44:59.777117] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:05.639 [2024-07-15 19:44:59.777212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63931 ] 00:07:05.639 { 00:07:05.639 "subsystems": [ 00:07:05.639 { 00:07:05.639 "subsystem": "bdev", 00:07:05.639 "config": [ 00:07:05.639 { 00:07:05.639 "params": { 00:07:05.639 "trtype": "pcie", 00:07:05.639 "traddr": "0000:00:10.0", 00:07:05.639 "name": "Nvme0" 00:07:05.639 }, 00:07:05.639 "method": "bdev_nvme_attach_controller" 00:07:05.639 }, 00:07:05.639 { 00:07:05.639 "params": { 00:07:05.639 "trtype": "pcie", 00:07:05.639 "traddr": "0000:00:11.0", 00:07:05.639 "name": "Nvme1" 00:07:05.639 }, 00:07:05.639 "method": "bdev_nvme_attach_controller" 00:07:05.639 }, 00:07:05.639 { 00:07:05.639 "method": "bdev_wait_for_examine" 00:07:05.639 } 00:07:05.639 ] 00:07:05.639 } 00:07:05.639 ] 00:07:05.639 } 00:07:05.897 [2024-07-15 19:44:59.913739] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.897 [2024-07-15 19:45:00.010239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.897 [2024-07-15 19:45:00.065041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.463  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:06.463 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:06.463 19:45:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:06.463 [2024-07-15 19:45:00.522998] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:06.463 [2024-07-15 19:45:00.523124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63953 ] 00:07:06.463 { 00:07:06.463 "subsystems": [ 00:07:06.463 { 00:07:06.463 "subsystem": "bdev", 00:07:06.463 "config": [ 00:07:06.463 { 00:07:06.463 "params": { 00:07:06.463 "trtype": "pcie", 00:07:06.463 "traddr": "0000:00:10.0", 00:07:06.463 "name": "Nvme0" 00:07:06.463 }, 00:07:06.463 "method": "bdev_nvme_attach_controller" 00:07:06.463 }, 00:07:06.463 { 00:07:06.463 "params": { 00:07:06.463 "trtype": "pcie", 00:07:06.463 "traddr": "0000:00:11.0", 00:07:06.463 "name": "Nvme1" 00:07:06.463 }, 00:07:06.463 "method": "bdev_nvme_attach_controller" 00:07:06.463 }, 00:07:06.463 { 00:07:06.463 "method": "bdev_wait_for_examine" 00:07:06.463 } 00:07:06.463 ] 00:07:06.463 } 00:07:06.463 ] 00:07:06.463 } 00:07:06.463 [2024-07-15 19:45:00.661758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.722 [2024-07-15 19:45:00.787001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.722 [2024-07-15 19:45:00.845481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.239  Copying: 65/65 [MB] (average 890 MBps) 00:07:07.239 00:07:07.239 19:45:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:07.239 19:45:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:07.239 19:45:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:07.239 19:45:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:07.239 [2024-07-15 19:45:01.431455] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:07.239 [2024-07-15 19:45:01.431578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63973 ] 00:07:07.239 { 00:07:07.239 "subsystems": [ 00:07:07.239 { 00:07:07.239 "subsystem": "bdev", 00:07:07.239 "config": [ 00:07:07.239 { 00:07:07.239 "params": { 00:07:07.239 "trtype": "pcie", 00:07:07.239 "traddr": "0000:00:10.0", 00:07:07.239 "name": "Nvme0" 00:07:07.239 }, 00:07:07.239 "method": "bdev_nvme_attach_controller" 00:07:07.239 }, 00:07:07.239 { 00:07:07.240 "params": { 00:07:07.240 "trtype": "pcie", 00:07:07.240 "traddr": "0000:00:11.0", 00:07:07.240 "name": "Nvme1" 00:07:07.240 }, 00:07:07.240 "method": "bdev_nvme_attach_controller" 00:07:07.240 }, 00:07:07.240 { 00:07:07.240 "method": "bdev_wait_for_examine" 00:07:07.240 } 00:07:07.240 ] 00:07:07.240 } 00:07:07.240 ] 00:07:07.240 } 00:07:07.497 [2024-07-15 19:45:01.573321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.497 [2024-07-15 19:45:01.684283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.497 [2024-07-15 19:45:01.739335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.013  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:08.013 00:07:08.013 ************************************ 00:07:08.013 END TEST dd_offset_magic 00:07:08.013 ************************************ 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:08.013 00:07:08.013 real 0m3.279s 00:07:08.013 user 0m2.396s 00:07:08.013 sys 0m0.952s 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:08.013 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.013 [2024-07-15 19:45:02.227706] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
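The dd_offset_magic rounds that finish above follow one pattern per offset: write 65 one-MiB blocks from Nvme0n1 into Nvme1n1 at a block offset (--seek), read a single block back from that offset (--skip), and check that it begins with the 26-byte magic. A rough sketch of one round, where conf.json and /tmp/dd.dump1 stand in for the test's /dev/fd config plumbing and dd.dump1 file:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json conf.json   # place data at a 16 MiB offset
"$DD" --ib=Nvme1n1 --of=/tmp/dd.dump1 --count=1 --skip=16 --bs=1048576 --json conf.json   # read 1 MiB back from the same offset
read -rn26 magic_check < /tmp/dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]]   # the test repeats this with seek/skip of 64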
00:07:08.013 [2024-07-15 19:45:02.227808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64004 ] 00:07:08.013 { 00:07:08.013 "subsystems": [ 00:07:08.013 { 00:07:08.013 "subsystem": "bdev", 00:07:08.013 "config": [ 00:07:08.013 { 00:07:08.013 "params": { 00:07:08.013 "trtype": "pcie", 00:07:08.013 "traddr": "0000:00:10.0", 00:07:08.013 "name": "Nvme0" 00:07:08.013 }, 00:07:08.013 "method": "bdev_nvme_attach_controller" 00:07:08.013 }, 00:07:08.013 { 00:07:08.013 "params": { 00:07:08.013 "trtype": "pcie", 00:07:08.013 "traddr": "0000:00:11.0", 00:07:08.013 "name": "Nvme1" 00:07:08.013 }, 00:07:08.013 "method": "bdev_nvme_attach_controller" 00:07:08.013 }, 00:07:08.013 { 00:07:08.014 "method": "bdev_wait_for_examine" 00:07:08.014 } 00:07:08.014 ] 00:07:08.014 } 00:07:08.014 ] 00:07:08.014 } 00:07:08.272 [2024-07-15 19:45:02.368598] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.272 [2024-07-15 19:45:02.493155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.530 [2024-07-15 19:45:02.551825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.788  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:08.788 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:08.788 19:45:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.788 [2024-07-15 19:45:03.004157] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
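The clear_nvme cleanup passes above wipe size=4194330 bytes (arithmetically, 4 MiB plus the 26-byte magic) from each controller in 1 MiB blocks; the helper appears to round the byte count up to whole blocks, which is where count=5 and the 5120/5120 kB progress lines come from:

echo $(( (4194330 + 1048576 - 1) / 1048576 ))   # -> 5 one-MiB blocks for a 4194330-byte region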
00:07:08.788 [2024-07-15 19:45:03.004267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64020 ] 00:07:08.788 { 00:07:08.788 "subsystems": [ 00:07:08.788 { 00:07:08.788 "subsystem": "bdev", 00:07:08.788 "config": [ 00:07:08.788 { 00:07:08.788 "params": { 00:07:08.788 "trtype": "pcie", 00:07:08.788 "traddr": "0000:00:10.0", 00:07:08.788 "name": "Nvme0" 00:07:08.788 }, 00:07:08.788 "method": "bdev_nvme_attach_controller" 00:07:08.788 }, 00:07:08.788 { 00:07:08.788 "params": { 00:07:08.788 "trtype": "pcie", 00:07:08.788 "traddr": "0000:00:11.0", 00:07:08.788 "name": "Nvme1" 00:07:08.788 }, 00:07:08.788 "method": "bdev_nvme_attach_controller" 00:07:08.788 }, 00:07:08.788 { 00:07:08.789 "method": "bdev_wait_for_examine" 00:07:08.789 } 00:07:08.789 ] 00:07:08.789 } 00:07:08.789 ] 00:07:08.789 } 00:07:09.047 [2024-07-15 19:45:03.144413] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.047 [2024-07-15 19:45:03.225027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.047 [2024-07-15 19:45:03.277791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.562  Copying: 5120/5120 [kB] (average 714 MBps) 00:07:09.562 00:07:09.562 19:45:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:09.562 ************************************ 00:07:09.562 END TEST spdk_dd_bdev_to_bdev 00:07:09.562 ************************************ 00:07:09.562 00:07:09.562 real 0m7.713s 00:07:09.562 user 0m5.734s 00:07:09.562 sys 0m3.524s 00:07:09.562 19:45:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.562 19:45:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 19:45:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:09.562 19:45:03 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:09.562 19:45:03 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:09.562 19:45:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.562 19:45:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.562 19:45:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 ************************************ 00:07:09.562 START TEST spdk_dd_uring 00:07:09.562 ************************************ 00:07:09.562 19:45:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:09.821 * Looking for test storage... 
00:07:09.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:09.821 ************************************ 00:07:09.821 START TEST dd_uring_copy 00:07:09.821 ************************************ 00:07:09.821 
19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:09.821 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=tc1r9cmuxmoz2agahiop1fe4xt3b7rz2tvxlm079du8hg2401rh50axawcfp4e98gyy7idpqlg9lvhsscqms7bihy10kai4p94nmoc9sacbftzu1s3wfh24g6mr851gbcqtduilmso963g40x8thvnun8ypct80fnob3mec01i6wv28bhw88ikzppj8u9jn7zevqf326dpk98n9l1a5mbs5qkqsqeisjpf68y7g7enif7zwu03cx00zb9n8bqa0fai2zd5ao5np8qa9s43wsner0jxrtr3jqburkzbnubdvsj3jls7t0zqnlfxismt9dwk6f8vd6srigp9vzje0j2jqkg49jl69jq23jb5dbvde8oud8z2ia99s7r37ako88a02sfjy1obat2yroezseedp4gw1vn592uz58y18qo1xzmg1wiq6zzh9pnq732wdbysl5qd53zyqbs9vbbf9u1u2jge3mic9068g838ervztwps5dg2knt3akhq8skh9nugf12suy90nzkuztlx6alezaultgv8db6apmzjzinwsir5osmlde0n2ojla7pmw04ut9m2a3ahhfqonivsm6g6e8aklaysyzhwi8dys1chepisxavdj7fngkhw11qoz7ls25b4758grtqbnk4uc70mgvqtj05ifm5i93cb1jmbs5hjqjuzu57cwgw45ug9ncsctxglajpp81qkpfmun0cqf53zqx9mtjsot5py12p7he2yhq01sfvr30u366ngo9u2lptttwxf36ixihr78fki1rtrm1wo8r04l1w4jxk2p619rqzedegvpwhezhft6w9g85djlr0xm4ou4hr9two0din2ybp7ia4cn4rfryqgoaf593r38imu1dn71sust1naxogibz3xu2tmmy3dw2hewujtl07m67k465l0wl6uiga58u2vjyvm5hjegkkrdddessdouhz5dgbm2cyzjyg8q80cvcusd0a9bt0ck5ka5uoyrbju1xw54isvor0c9h 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo tc1r9cmuxmoz2agahiop1fe4xt3b7rz2tvxlm079du8hg2401rh50axawcfp4e98gyy7idpqlg9lvhsscqms7bihy10kai4p94nmoc9sacbftzu1s3wfh24g6mr851gbcqtduilmso963g40x8thvnun8ypct80fnob3mec01i6wv28bhw88ikzppj8u9jn7zevqf326dpk98n9l1a5mbs5qkqsqeisjpf68y7g7enif7zwu03cx00zb9n8bqa0fai2zd5ao5np8qa9s43wsner0jxrtr3jqburkzbnubdvsj3jls7t0zqnlfxismt9dwk6f8vd6srigp9vzje0j2jqkg49jl69jq23jb5dbvde8oud8z2ia99s7r37ako88a02sfjy1obat2yroezseedp4gw1vn592uz58y18qo1xzmg1wiq6zzh9pnq732wdbysl5qd53zyqbs9vbbf9u1u2jge3mic9068g838ervztwps5dg2knt3akhq8skh9nugf12suy90nzkuztlx6alezaultgv8db6apmzjzinwsir5osmlde0n2ojla7pmw04ut9m2a3ahhfqonivsm6g6e8aklaysyzhwi8dys1chepisxavdj7fngkhw11qoz7ls25b4758grtqbnk4uc70mgvqtj05ifm5i93cb1jmbs5hjqjuzu57cwgw45ug9ncsctxglajpp81qkpfmun0cqf53zqx9mtjsot5py12p7he2yhq01sfvr30u366ngo9u2lptttwxf36ixihr78fki1rtrm1wo8r04l1w4jxk2p619rqzedegvpwhezhft6w9g85djlr0xm4ou4hr9two0din2ybp7ia4cn4rfryqgoaf593r38imu1dn71sust1naxogibz3xu2tmmy3dw2hewujtl07m67k465l0wl6uiga58u2vjyvm5hjegkkrdddessdouhz5dgbm2cyzjyg8q80cvcusd0a9bt0ck5ka5uoyrbju1xw54isvor0c9h 00:07:09.822 19:45:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:09.822 [2024-07-15 19:45:03.930418] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
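The uring_zram_copy setup above stages a compressed-RAM block device and later exposes it to spdk_dd as an io_uring bdev. A condensed sketch of that staging (requires root; the exact sysfs write target is inferred from the [[ -e /sys/block/zram1 ]] check and the echo 512M step, so treat the path as an assumption):

id=$(cat /sys/class/zram-control/hot_add)      # allocates the next free zram device and prints its index (1 in this run)
echo 512M > "/sys/block/zram${id}/disksize"    # size it at 512 MiB before first use
# spdk_dd then attaches it through the bdev config that appears further down in the log:
#   { "method": "bdev_uring_create", "params": { "filename": "/dev/zram1", "name": "uring0" } }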
00:07:09.822 [2024-07-15 19:45:03.930510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64090 ] 00:07:09.822 [2024-07-15 19:45:04.064755] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.080 [2024-07-15 19:45:04.178593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.080 [2024-07-15 19:45:04.231446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.221  Copying: 511/511 [MB] (average 1410 MBps) 00:07:11.221 00:07:11.221 19:45:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:11.221 19:45:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:11.221 19:45:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:11.221 19:45:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.221 [2024-07-15 19:45:05.283293] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:11.221 [2024-07-15 19:45:05.283651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:07:11.221 { 00:07:11.221 "subsystems": [ 00:07:11.221 { 00:07:11.221 "subsystem": "bdev", 00:07:11.221 "config": [ 00:07:11.221 { 00:07:11.221 "params": { 00:07:11.221 "block_size": 512, 00:07:11.221 "num_blocks": 1048576, 00:07:11.221 "name": "malloc0" 00:07:11.221 }, 00:07:11.221 "method": "bdev_malloc_create" 00:07:11.221 }, 00:07:11.221 { 00:07:11.221 "params": { 00:07:11.221 "filename": "/dev/zram1", 00:07:11.221 "name": "uring0" 00:07:11.221 }, 00:07:11.221 "method": "bdev_uring_create" 00:07:11.221 }, 00:07:11.221 { 00:07:11.221 "method": "bdev_wait_for_examine" 00:07:11.221 } 00:07:11.221 ] 00:07:11.221 } 00:07:11.221 ] 00:07:11.221 } 00:07:11.221 [2024-07-15 19:45:05.419588] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.484 [2024-07-15 19:45:05.527234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.484 [2024-07-15 19:45:05.583052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.359  Copying: 231/512 [MB] (231 MBps) Copying: 464/512 [MB] (232 MBps) Copying: 512/512 [MB] (average 230 MBps) 00:07:14.359 00:07:14.359 19:45:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:14.359 19:45:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:14.359 19:45:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:14.359 19:45:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.359 [2024-07-15 19:45:08.493812] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
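The odd-looking --bs=536869887 --count=1 append above is sized so that the 1024-character magic line (1025 bytes, assuming echo lands it in magic.dump0 with a trailing newline) plus the appended zeroes exactly fill the 512 MiB uring and malloc devices:

echo $(( 1025 + 536869887 ))    # -> 536870912
echo $(( 512 * 1024 * 1024 ))   # -> 536870912, the full device size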
00:07:14.359 [2024-07-15 19:45:08.493913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64155 ] 00:07:14.359 { 00:07:14.359 "subsystems": [ 00:07:14.359 { 00:07:14.359 "subsystem": "bdev", 00:07:14.359 "config": [ 00:07:14.359 { 00:07:14.359 "params": { 00:07:14.359 "block_size": 512, 00:07:14.359 "num_blocks": 1048576, 00:07:14.359 "name": "malloc0" 00:07:14.359 }, 00:07:14.359 "method": "bdev_malloc_create" 00:07:14.359 }, 00:07:14.359 { 00:07:14.359 "params": { 00:07:14.359 "filename": "/dev/zram1", 00:07:14.359 "name": "uring0" 00:07:14.359 }, 00:07:14.359 "method": "bdev_uring_create" 00:07:14.359 }, 00:07:14.359 { 00:07:14.359 "method": "bdev_wait_for_examine" 00:07:14.359 } 00:07:14.359 ] 00:07:14.359 } 00:07:14.359 ] 00:07:14.359 } 00:07:14.617 [2024-07-15 19:45:08.630092] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.617 [2024-07-15 19:45:08.746432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.617 [2024-07-15 19:45:08.802021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.117  Copying: 186/512 [MB] (186 MBps) Copying: 357/512 [MB] (171 MBps) Copying: 512/512 [MB] (average 179 MBps) 00:07:18.117 00:07:18.117 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:18.117 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ tc1r9cmuxmoz2agahiop1fe4xt3b7rz2tvxlm079du8hg2401rh50axawcfp4e98gyy7idpqlg9lvhsscqms7bihy10kai4p94nmoc9sacbftzu1s3wfh24g6mr851gbcqtduilmso963g40x8thvnun8ypct80fnob3mec01i6wv28bhw88ikzppj8u9jn7zevqf326dpk98n9l1a5mbs5qkqsqeisjpf68y7g7enif7zwu03cx00zb9n8bqa0fai2zd5ao5np8qa9s43wsner0jxrtr3jqburkzbnubdvsj3jls7t0zqnlfxismt9dwk6f8vd6srigp9vzje0j2jqkg49jl69jq23jb5dbvde8oud8z2ia99s7r37ako88a02sfjy1obat2yroezseedp4gw1vn592uz58y18qo1xzmg1wiq6zzh9pnq732wdbysl5qd53zyqbs9vbbf9u1u2jge3mic9068g838ervztwps5dg2knt3akhq8skh9nugf12suy90nzkuztlx6alezaultgv8db6apmzjzinwsir5osmlde0n2ojla7pmw04ut9m2a3ahhfqonivsm6g6e8aklaysyzhwi8dys1chepisxavdj7fngkhw11qoz7ls25b4758grtqbnk4uc70mgvqtj05ifm5i93cb1jmbs5hjqjuzu57cwgw45ug9ncsctxglajpp81qkpfmun0cqf53zqx9mtjsot5py12p7he2yhq01sfvr30u366ngo9u2lptttwxf36ixihr78fki1rtrm1wo8r04l1w4jxk2p619rqzedegvpwhezhft6w9g85djlr0xm4ou4hr9two0din2ybp7ia4cn4rfryqgoaf593r38imu1dn71sust1naxogibz3xu2tmmy3dw2hewujtl07m67k465l0wl6uiga58u2vjyvm5hjegkkrdddessdouhz5dgbm2cyzjyg8q80cvcusd0a9bt0ck5ka5uoyrbju1xw54isvor0c9h == 
\t\c\1\r\9\c\m\u\x\m\o\z\2\a\g\a\h\i\o\p\1\f\e\4\x\t\3\b\7\r\z\2\t\v\x\l\m\0\7\9\d\u\8\h\g\2\4\0\1\r\h\5\0\a\x\a\w\c\f\p\4\e\9\8\g\y\y\7\i\d\p\q\l\g\9\l\v\h\s\s\c\q\m\s\7\b\i\h\y\1\0\k\a\i\4\p\9\4\n\m\o\c\9\s\a\c\b\f\t\z\u\1\s\3\w\f\h\2\4\g\6\m\r\8\5\1\g\b\c\q\t\d\u\i\l\m\s\o\9\6\3\g\4\0\x\8\t\h\v\n\u\n\8\y\p\c\t\8\0\f\n\o\b\3\m\e\c\0\1\i\6\w\v\2\8\b\h\w\8\8\i\k\z\p\p\j\8\u\9\j\n\7\z\e\v\q\f\3\2\6\d\p\k\9\8\n\9\l\1\a\5\m\b\s\5\q\k\q\s\q\e\i\s\j\p\f\6\8\y\7\g\7\e\n\i\f\7\z\w\u\0\3\c\x\0\0\z\b\9\n\8\b\q\a\0\f\a\i\2\z\d\5\a\o\5\n\p\8\q\a\9\s\4\3\w\s\n\e\r\0\j\x\r\t\r\3\j\q\b\u\r\k\z\b\n\u\b\d\v\s\j\3\j\l\s\7\t\0\z\q\n\l\f\x\i\s\m\t\9\d\w\k\6\f\8\v\d\6\s\r\i\g\p\9\v\z\j\e\0\j\2\j\q\k\g\4\9\j\l\6\9\j\q\2\3\j\b\5\d\b\v\d\e\8\o\u\d\8\z\2\i\a\9\9\s\7\r\3\7\a\k\o\8\8\a\0\2\s\f\j\y\1\o\b\a\t\2\y\r\o\e\z\s\e\e\d\p\4\g\w\1\v\n\5\9\2\u\z\5\8\y\1\8\q\o\1\x\z\m\g\1\w\i\q\6\z\z\h\9\p\n\q\7\3\2\w\d\b\y\s\l\5\q\d\5\3\z\y\q\b\s\9\v\b\b\f\9\u\1\u\2\j\g\e\3\m\i\c\9\0\6\8\g\8\3\8\e\r\v\z\t\w\p\s\5\d\g\2\k\n\t\3\a\k\h\q\8\s\k\h\9\n\u\g\f\1\2\s\u\y\9\0\n\z\k\u\z\t\l\x\6\a\l\e\z\a\u\l\t\g\v\8\d\b\6\a\p\m\z\j\z\i\n\w\s\i\r\5\o\s\m\l\d\e\0\n\2\o\j\l\a\7\p\m\w\0\4\u\t\9\m\2\a\3\a\h\h\f\q\o\n\i\v\s\m\6\g\6\e\8\a\k\l\a\y\s\y\z\h\w\i\8\d\y\s\1\c\h\e\p\i\s\x\a\v\d\j\7\f\n\g\k\h\w\1\1\q\o\z\7\l\s\2\5\b\4\7\5\8\g\r\t\q\b\n\k\4\u\c\7\0\m\g\v\q\t\j\0\5\i\f\m\5\i\9\3\c\b\1\j\m\b\s\5\h\j\q\j\u\z\u\5\7\c\w\g\w\4\5\u\g\9\n\c\s\c\t\x\g\l\a\j\p\p\8\1\q\k\p\f\m\u\n\0\c\q\f\5\3\z\q\x\9\m\t\j\s\o\t\5\p\y\1\2\p\7\h\e\2\y\h\q\0\1\s\f\v\r\3\0\u\3\6\6\n\g\o\9\u\2\l\p\t\t\t\w\x\f\3\6\i\x\i\h\r\7\8\f\k\i\1\r\t\r\m\1\w\o\8\r\0\4\l\1\w\4\j\x\k\2\p\6\1\9\r\q\z\e\d\e\g\v\p\w\h\e\z\h\f\t\6\w\9\g\8\5\d\j\l\r\0\x\m\4\o\u\4\h\r\9\t\w\o\0\d\i\n\2\y\b\p\7\i\a\4\c\n\4\r\f\r\y\q\g\o\a\f\5\9\3\r\3\8\i\m\u\1\d\n\7\1\s\u\s\t\1\n\a\x\o\g\i\b\z\3\x\u\2\t\m\m\y\3\d\w\2\h\e\w\u\j\t\l\0\7\m\6\7\k\4\6\5\l\0\w\l\6\u\i\g\a\5\8\u\2\v\j\y\v\m\5\h\j\e\g\k\k\r\d\d\d\e\s\s\d\o\u\h\z\5\d\g\b\m\2\c\y\z\j\y\g\8\q\8\0\c\v\c\u\s\d\0\a\9\b\t\0\c\k\5\k\a\5\u\o\y\r\b\j\u\1\x\w\5\4\i\s\v\o\r\0\c\9\h ]] 00:07:18.117 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:18.118 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ tc1r9cmuxmoz2agahiop1fe4xt3b7rz2tvxlm079du8hg2401rh50axawcfp4e98gyy7idpqlg9lvhsscqms7bihy10kai4p94nmoc9sacbftzu1s3wfh24g6mr851gbcqtduilmso963g40x8thvnun8ypct80fnob3mec01i6wv28bhw88ikzppj8u9jn7zevqf326dpk98n9l1a5mbs5qkqsqeisjpf68y7g7enif7zwu03cx00zb9n8bqa0fai2zd5ao5np8qa9s43wsner0jxrtr3jqburkzbnubdvsj3jls7t0zqnlfxismt9dwk6f8vd6srigp9vzje0j2jqkg49jl69jq23jb5dbvde8oud8z2ia99s7r37ako88a02sfjy1obat2yroezseedp4gw1vn592uz58y18qo1xzmg1wiq6zzh9pnq732wdbysl5qd53zyqbs9vbbf9u1u2jge3mic9068g838ervztwps5dg2knt3akhq8skh9nugf12suy90nzkuztlx6alezaultgv8db6apmzjzinwsir5osmlde0n2ojla7pmw04ut9m2a3ahhfqonivsm6g6e8aklaysyzhwi8dys1chepisxavdj7fngkhw11qoz7ls25b4758grtqbnk4uc70mgvqtj05ifm5i93cb1jmbs5hjqjuzu57cwgw45ug9ncsctxglajpp81qkpfmun0cqf53zqx9mtjsot5py12p7he2yhq01sfvr30u366ngo9u2lptttwxf36ixihr78fki1rtrm1wo8r04l1w4jxk2p619rqzedegvpwhezhft6w9g85djlr0xm4ou4hr9two0din2ybp7ia4cn4rfryqgoaf593r38imu1dn71sust1naxogibz3xu2tmmy3dw2hewujtl07m67k465l0wl6uiga58u2vjyvm5hjegkkrdddessdouhz5dgbm2cyzjyg8q80cvcusd0a9bt0ck5ka5uoyrbju1xw54isvor0c9h == 
\t\c\1\r\9\c\m\u\x\m\o\z\2\a\g\a\h\i\o\p\1\f\e\4\x\t\3\b\7\r\z\2\t\v\x\l\m\0\7\9\d\u\8\h\g\2\4\0\1\r\h\5\0\a\x\a\w\c\f\p\4\e\9\8\g\y\y\7\i\d\p\q\l\g\9\l\v\h\s\s\c\q\m\s\7\b\i\h\y\1\0\k\a\i\4\p\9\4\n\m\o\c\9\s\a\c\b\f\t\z\u\1\s\3\w\f\h\2\4\g\6\m\r\8\5\1\g\b\c\q\t\d\u\i\l\m\s\o\9\6\3\g\4\0\x\8\t\h\v\n\u\n\8\y\p\c\t\8\0\f\n\o\b\3\m\e\c\0\1\i\6\w\v\2\8\b\h\w\8\8\i\k\z\p\p\j\8\u\9\j\n\7\z\e\v\q\f\3\2\6\d\p\k\9\8\n\9\l\1\a\5\m\b\s\5\q\k\q\s\q\e\i\s\j\p\f\6\8\y\7\g\7\e\n\i\f\7\z\w\u\0\3\c\x\0\0\z\b\9\n\8\b\q\a\0\f\a\i\2\z\d\5\a\o\5\n\p\8\q\a\9\s\4\3\w\s\n\e\r\0\j\x\r\t\r\3\j\q\b\u\r\k\z\b\n\u\b\d\v\s\j\3\j\l\s\7\t\0\z\q\n\l\f\x\i\s\m\t\9\d\w\k\6\f\8\v\d\6\s\r\i\g\p\9\v\z\j\e\0\j\2\j\q\k\g\4\9\j\l\6\9\j\q\2\3\j\b\5\d\b\v\d\e\8\o\u\d\8\z\2\i\a\9\9\s\7\r\3\7\a\k\o\8\8\a\0\2\s\f\j\y\1\o\b\a\t\2\y\r\o\e\z\s\e\e\d\p\4\g\w\1\v\n\5\9\2\u\z\5\8\y\1\8\q\o\1\x\z\m\g\1\w\i\q\6\z\z\h\9\p\n\q\7\3\2\w\d\b\y\s\l\5\q\d\5\3\z\y\q\b\s\9\v\b\b\f\9\u\1\u\2\j\g\e\3\m\i\c\9\0\6\8\g\8\3\8\e\r\v\z\t\w\p\s\5\d\g\2\k\n\t\3\a\k\h\q\8\s\k\h\9\n\u\g\f\1\2\s\u\y\9\0\n\z\k\u\z\t\l\x\6\a\l\e\z\a\u\l\t\g\v\8\d\b\6\a\p\m\z\j\z\i\n\w\s\i\r\5\o\s\m\l\d\e\0\n\2\o\j\l\a\7\p\m\w\0\4\u\t\9\m\2\a\3\a\h\h\f\q\o\n\i\v\s\m\6\g\6\e\8\a\k\l\a\y\s\y\z\h\w\i\8\d\y\s\1\c\h\e\p\i\s\x\a\v\d\j\7\f\n\g\k\h\w\1\1\q\o\z\7\l\s\2\5\b\4\7\5\8\g\r\t\q\b\n\k\4\u\c\7\0\m\g\v\q\t\j\0\5\i\f\m\5\i\9\3\c\b\1\j\m\b\s\5\h\j\q\j\u\z\u\5\7\c\w\g\w\4\5\u\g\9\n\c\s\c\t\x\g\l\a\j\p\p\8\1\q\k\p\f\m\u\n\0\c\q\f\5\3\z\q\x\9\m\t\j\s\o\t\5\p\y\1\2\p\7\h\e\2\y\h\q\0\1\s\f\v\r\3\0\u\3\6\6\n\g\o\9\u\2\l\p\t\t\t\w\x\f\3\6\i\x\i\h\r\7\8\f\k\i\1\r\t\r\m\1\w\o\8\r\0\4\l\1\w\4\j\x\k\2\p\6\1\9\r\q\z\e\d\e\g\v\p\w\h\e\z\h\f\t\6\w\9\g\8\5\d\j\l\r\0\x\m\4\o\u\4\h\r\9\t\w\o\0\d\i\n\2\y\b\p\7\i\a\4\c\n\4\r\f\r\y\q\g\o\a\f\5\9\3\r\3\8\i\m\u\1\d\n\7\1\s\u\s\t\1\n\a\x\o\g\i\b\z\3\x\u\2\t\m\m\y\3\d\w\2\h\e\w\u\j\t\l\0\7\m\6\7\k\4\6\5\l\0\w\l\6\u\i\g\a\5\8\u\2\v\j\y\v\m\5\h\j\e\g\k\k\r\d\d\d\e\s\s\d\o\u\h\z\5\d\g\b\m\2\c\y\z\j\y\g\8\q\8\0\c\v\c\u\s\d\0\a\9\b\t\0\c\k\5\k\a\5\u\o\y\r\b\j\u\1\x\w\5\4\i\s\v\o\r\0\c\9\h ]] 00:07:18.118 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:18.685 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:18.685 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:18.685 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:18.685 19:45:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.685 [2024-07-15 19:45:12.708026] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
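The two long bracket expressions above are bash xtrace output for the magic verification: the first 1024 bytes read back are compared against the original magic string, and xtrace prints the pattern side of [[ == ]] with every character backslash-escaped, hence the wall of backslashes. In plain form the check is roughly as follows (which dump file feeds each read is an assumption; the test keeps a magic.dump0/magic.dump1 pair):

read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
[[ $verify_magic == "$magic" ]]
diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1   # byte-for-byte check of the round trip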
00:07:18.685 [2024-07-15 19:45:12.708137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64227 ] 00:07:18.685 { 00:07:18.685 "subsystems": [ 00:07:18.685 { 00:07:18.685 "subsystem": "bdev", 00:07:18.685 "config": [ 00:07:18.685 { 00:07:18.685 "params": { 00:07:18.685 "block_size": 512, 00:07:18.685 "num_blocks": 1048576, 00:07:18.685 "name": "malloc0" 00:07:18.685 }, 00:07:18.685 "method": "bdev_malloc_create" 00:07:18.685 }, 00:07:18.685 { 00:07:18.685 "params": { 00:07:18.685 "filename": "/dev/zram1", 00:07:18.685 "name": "uring0" 00:07:18.685 }, 00:07:18.685 "method": "bdev_uring_create" 00:07:18.685 }, 00:07:18.685 { 00:07:18.685 "method": "bdev_wait_for_examine" 00:07:18.685 } 00:07:18.685 ] 00:07:18.685 } 00:07:18.685 ] 00:07:18.685 } 00:07:18.685 [2024-07-15 19:45:12.845102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.945 [2024-07-15 19:45:12.958462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.945 [2024-07-15 19:45:13.013874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.708  Copying: 153/512 [MB] (153 MBps) Copying: 317/512 [MB] (163 MBps) Copying: 475/512 [MB] (158 MBps) Copying: 512/512 [MB] (average 158 MBps) 00:07:22.708 00:07:22.708 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:22.708 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:22.708 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:22.708 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:22.709 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:22.709 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:22.709 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.709 19:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:22.709 [2024-07-15 19:45:16.927301] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
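The round starting above exercises the error path: the bdev config now ends with a bdev_uring_delete entry, and once uring0 has been deleted a follow-up spdk_dd read from it is expected to fail (the NOT helper further down asserts a non-zero exit). Condensed, with conf.json standing in for the /dev/fd plumbing:

# conf.json ends with: { "method": "bdev_uring_delete", "params": { "name": "uring0" } }
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/dev/null --bs=512 --count=1 --json conf.json   # applies the config; the delete succeeds
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null --json conf.json   # 'NOT' is the autotest helper; reading the deleted bdev must fail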
00:07:22.709 [2024-07-15 19:45:16.927664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64282 ] 00:07:22.709 { 00:07:22.709 "subsystems": [ 00:07:22.709 { 00:07:22.709 "subsystem": "bdev", 00:07:22.709 "config": [ 00:07:22.709 { 00:07:22.709 "params": { 00:07:22.709 "block_size": 512, 00:07:22.709 "num_blocks": 1048576, 00:07:22.709 "name": "malloc0" 00:07:22.709 }, 00:07:22.709 "method": "bdev_malloc_create" 00:07:22.709 }, 00:07:22.709 { 00:07:22.709 "params": { 00:07:22.709 "filename": "/dev/zram1", 00:07:22.709 "name": "uring0" 00:07:22.709 }, 00:07:22.709 "method": "bdev_uring_create" 00:07:22.709 }, 00:07:22.709 { 00:07:22.709 "params": { 00:07:22.709 "name": "uring0" 00:07:22.709 }, 00:07:22.709 "method": "bdev_uring_delete" 00:07:22.709 }, 00:07:22.709 { 00:07:22.709 "method": "bdev_wait_for_examine" 00:07:22.709 } 00:07:22.709 ] 00:07:22.709 } 00:07:22.709 ] 00:07:22.709 } 00:07:22.968 [2024-07-15 19:45:17.066000] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.968 [2024-07-15 19:45:17.167553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.226 [2024-07-15 19:45:17.222280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.795  Copying: 0/0 [B] (average 0 Bps) 00:07:23.795 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.795 19:45:17 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:23.795 { 00:07:23.795 "subsystems": [ 00:07:23.795 { 00:07:23.795 "subsystem": "bdev", 00:07:23.796 "config": [ 00:07:23.796 { 00:07:23.796 "params": { 00:07:23.796 "block_size": 512, 00:07:23.796 "num_blocks": 1048576, 00:07:23.796 "name": "malloc0" 00:07:23.796 }, 00:07:23.796 "method": "bdev_malloc_create" 00:07:23.796 }, 00:07:23.796 { 00:07:23.796 "params": { 00:07:23.796 "filename": "/dev/zram1", 00:07:23.796 "name": "uring0" 00:07:23.796 }, 00:07:23.796 "method": "bdev_uring_create" 00:07:23.796 }, 00:07:23.796 { 00:07:23.796 "params": { 00:07:23.796 "name": "uring0" 00:07:23.796 }, 00:07:23.796 "method": "bdev_uring_delete" 00:07:23.796 }, 00:07:23.796 { 00:07:23.796 "method": "bdev_wait_for_examine" 00:07:23.796 } 00:07:23.796 ] 00:07:23.796 } 00:07:23.796 ] 00:07:23.796 } 00:07:23.796 [2024-07-15 19:45:17.918561] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:23.796 [2024-07-15 19:45:17.918691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64309 ] 00:07:24.054 [2024-07-15 19:45:18.057262] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.054 [2024-07-15 19:45:18.172942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.054 [2024-07-15 19:45:18.228976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.313 [2024-07-15 19:45:18.431836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:24.313 [2024-07-15 19:45:18.431891] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:24.313 [2024-07-15 19:45:18.431919] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:24.313 [2024-07-15 19:45:18.431929] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.571 [2024-07-15 19:45:18.752064] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:24.830 19:45:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:25.090 00:07:25.090 ************************************ 00:07:25.090 END TEST dd_uring_copy 00:07:25.090 ************************************ 00:07:25.090 real 0m15.236s 00:07:25.090 user 0m10.305s 00:07:25.090 sys 0m12.418s 00:07:25.090 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.090 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.090 19:45:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:25.090 00:07:25.090 real 0m15.368s 00:07:25.090 user 0m10.362s 00:07:25.090 sys 0m12.489s 00:07:25.090 ************************************ 00:07:25.090 END TEST spdk_dd_uring 00:07:25.090 ************************************ 00:07:25.090 19:45:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.090 19:45:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:25.090 19:45:19 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:25.090 19:45:19 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:25.090 19:45:19 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.090 19:45:19 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.090 19:45:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.090 ************************************ 00:07:25.090 START TEST spdk_dd_sparse 00:07:25.090 ************************************ 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:25.090 * Looking for test storage... 00:07:25.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:25.090 1+0 records in 00:07:25.090 1+0 records out 00:07:25.090 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00802297 s, 523 MB/s 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:25.090 1+0 records in 00:07:25.090 1+0 records out 00:07:25.090 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00764951 s, 548 MB/s 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:25.090 1+0 records in 00:07:25.090 1+0 records out 00:07:25.090 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00414621 s, 1.0 GB/s 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:25.090 ************************************ 00:07:25.090 START TEST dd_sparse_file_to_file 00:07:25.090 ************************************ 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:25.090 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:25.349 [2024-07-15 19:45:19.354923] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:25.349 [2024-07-15 19:45:19.355016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64398 ] 00:07:25.349 { 00:07:25.349 "subsystems": [ 00:07:25.349 { 00:07:25.349 "subsystem": "bdev", 00:07:25.349 "config": [ 00:07:25.349 { 00:07:25.349 "params": { 00:07:25.349 "block_size": 4096, 00:07:25.349 "filename": "dd_sparse_aio_disk", 00:07:25.349 "name": "dd_aio" 00:07:25.349 }, 00:07:25.349 "method": "bdev_aio_create" 00:07:25.349 }, 00:07:25.349 { 00:07:25.349 "params": { 00:07:25.349 "lvs_name": "dd_lvstore", 00:07:25.349 "bdev_name": "dd_aio" 00:07:25.349 }, 00:07:25.349 "method": "bdev_lvol_create_lvstore" 00:07:25.349 }, 00:07:25.349 { 00:07:25.349 "method": "bdev_wait_for_examine" 00:07:25.349 } 00:07:25.349 ] 00:07:25.349 } 00:07:25.349 ] 00:07:25.349 } 00:07:25.349 [2024-07-15 19:45:19.490964] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.349 [2024-07-15 19:45:19.587851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.607 [2024-07-15 19:45:19.643669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.866  Copying: 12/36 [MB] (average 1090 MBps) 00:07:25.866 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:25.866 19:45:19 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:25.866 ************************************ 00:07:25.866 END TEST dd_sparse_file_to_file 00:07:25.866 ************************************ 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:25.866 00:07:25.866 real 0m0.692s 00:07:25.866 user 0m0.432s 00:07:25.866 sys 0m0.345s 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.866 19:45:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:25.866 ************************************ 00:07:25.866 START TEST dd_sparse_file_to_bdev 00:07:25.866 ************************************ 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:25.866 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:25.866 [2024-07-15 19:45:20.101088] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
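The file_to_file result above is verified by comparing a file's apparent size (stat %s) with its allocated blocks (stat %b). A minimal sketch of that check, assuming GNU coreutils dd and stat; the file name and offsets mirror the prepare step earlier in this run, and the allocated-block count can vary slightly by filesystem:

  # lay out three 4 MiB data extents at 0, 16 MiB and 32 MiB, leaving holes in between
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
  # %s counts the holes (apparent size), %b counts allocated 512-byte blocks only
  stat --printf='%s\n' file_zero1   # 37748736 -> 36 MiB apparent
  stat --printf='%b\n' file_zero1   # 24576    -> 12 MiB actually written

After the spdk_dd --sparse copy, the test expects both values to match between file_zero1 and file_zero2, i.e. the holes survive the copy.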
00:07:25.866 [2024-07-15 19:45:20.101833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64445 ] 00:07:26.124 { 00:07:26.124 "subsystems": [ 00:07:26.124 { 00:07:26.124 "subsystem": "bdev", 00:07:26.124 "config": [ 00:07:26.124 { 00:07:26.124 "params": { 00:07:26.124 "block_size": 4096, 00:07:26.124 "filename": "dd_sparse_aio_disk", 00:07:26.124 "name": "dd_aio" 00:07:26.124 }, 00:07:26.124 "method": "bdev_aio_create" 00:07:26.124 }, 00:07:26.124 { 00:07:26.124 "params": { 00:07:26.124 "lvs_name": "dd_lvstore", 00:07:26.124 "lvol_name": "dd_lvol", 00:07:26.124 "size_in_mib": 36, 00:07:26.124 "thin_provision": true 00:07:26.124 }, 00:07:26.124 "method": "bdev_lvol_create" 00:07:26.124 }, 00:07:26.124 { 00:07:26.124 "method": "bdev_wait_for_examine" 00:07:26.124 } 00:07:26.124 ] 00:07:26.124 } 00:07:26.124 ] 00:07:26.124 } 00:07:26.124 [2024-07-15 19:45:20.243904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.383 [2024-07-15 19:45:20.382277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.383 [2024-07-15 19:45:20.441121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.642  Copying: 12/36 [MB] (average 400 MBps) 00:07:26.642 00:07:26.642 00:07:26.642 real 0m0.761s 00:07:26.642 user 0m0.494s 00:07:26.642 sys 0m0.393s 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.642 ************************************ 00:07:26.642 END TEST dd_sparse_file_to_bdev 00:07:26.642 ************************************ 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 ************************************ 00:07:26.642 START TEST dd_sparse_bdev_to_file 00:07:26.642 ************************************ 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:26.642 19:45:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:26.901 [2024-07-15 19:45:20.914288] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:26.901 [2024-07-15 19:45:20.914397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:07:26.901 { 00:07:26.901 "subsystems": [ 00:07:26.901 { 00:07:26.901 "subsystem": "bdev", 00:07:26.901 "config": [ 00:07:26.901 { 00:07:26.901 "params": { 00:07:26.901 "block_size": 4096, 00:07:26.901 "filename": "dd_sparse_aio_disk", 00:07:26.901 "name": "dd_aio" 00:07:26.901 }, 00:07:26.901 "method": "bdev_aio_create" 00:07:26.901 }, 00:07:26.901 { 00:07:26.901 "method": "bdev_wait_for_examine" 00:07:26.901 } 00:07:26.901 ] 00:07:26.901 } 00:07:26.901 ] 00:07:26.901 } 00:07:26.901 [2024-07-15 19:45:21.052888] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.213 [2024-07-15 19:45:21.221898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.213 [2024-07-15 19:45:21.304891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.780  Copying: 12/36 [MB] (average 923 MBps) 00:07:27.780 00:07:27.780 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:27.780 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:27.780 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:27.781 ************************************ 00:07:27.781 END TEST dd_sparse_bdev_to_file 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:27.781 00:07:27.781 real 0m0.915s 00:07:27.781 user 0m0.603s 00:07:27.781 sys 0m0.469s 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:27.781 ************************************ 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:27.781 ************************************ 00:07:27.781 END TEST spdk_dd_sparse 00:07:27.781 ************************************ 00:07:27.781 00:07:27.781 real 0m2.671s 00:07:27.781 user 0m1.640s 00:07:27.781 sys 0m1.394s 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.781 19:45:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:27.781 19:45:21 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:27.781 19:45:21 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:27.781 19:45:21 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.781 19:45:21 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.781 19:45:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:27.781 ************************************ 00:07:27.781 START TEST spdk_dd_negative 00:07:27.781 ************************************ 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:27.781 * Looking for test storage... 00:07:27.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.781 19:45:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.781 ************************************ 00:07:27.781 START TEST dd_invalid_arguments 00:07:27.781 ************************************ 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.781 19:45:22 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.781 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:28.040 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:28.040 00:07:28.040 CPU options: 00:07:28.040 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:28.040 (like [0,1,10]) 00:07:28.040 --lcores lcore to CPU mapping list. The list is in the format: 00:07:28.040 [<,lcores[@CPUs]>...] 00:07:28.040 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:28.040 Within the group, '-' is used for range separator, 00:07:28.040 ',' is used for single number separator. 00:07:28.040 '( )' can be omitted for single element group, 00:07:28.040 '@' can be omitted if cpus and lcores have the same value 00:07:28.040 --disable-cpumask-locks Disable CPU core lock files. 00:07:28.040 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:28.040 pollers in the app support interrupt mode) 00:07:28.040 -p, --main-core main (primary) core for DPDK 00:07:28.040 00:07:28.040 Configuration options: 00:07:28.040 -c, --config, --json JSON config file 00:07:28.040 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:28.040 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:28.040 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:28.040 --rpcs-allowed comma-separated list of permitted RPCS 00:07:28.040 --json-ignore-init-errors don't exit on invalid config entry 00:07:28.040 00:07:28.040 Memory options: 00:07:28.040 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:28.040 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:28.040 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:28.040 -R, --huge-unlink unlink huge files after initialization 00:07:28.040 -n, --mem-channels number of memory channels used for DPDK 00:07:28.040 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:28.040 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:28.040 --no-huge run without using hugepages 00:07:28.040 --enforce-numa enforce NUMA allocations from the correct socket 00:07:28.040 -i, --shm-id shared memory ID (optional) 00:07:28.040 -g, --single-file-segments force creating just one hugetlbfs file 00:07:28.040 00:07:28.040 PCI options: 00:07:28.040 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:28.040 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:28.040 -u, --no-pci disable PCI access 00:07:28.040 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:28.040 00:07:28.040 Log options: 00:07:28.040 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:28.040 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:28.040 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:28.040 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:28.040 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:28.040 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:28.040 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:28.041 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:28.041 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:28.041 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:28.041 virtio_vfio_user, vmd) 00:07:28.041 --silence-noticelog disable notice level logging to stderr 00:07:28.041 00:07:28.041 Trace options: 00:07:28.041 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:28.041 setting 0 to disable trace (default 32768) 00:07:28.041 Tracepoints vary in size and can use more than one trace entry. 00:07:28.041 -e, --tpoint-group [:] 00:07:28.041 group_name - tracep/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:28.041 [2024-07-15 19:45:22.063952] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:28.041 oint group name for spdk trace buffers (bdev, ftl, 00:07:28.041 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:28.041 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:28.041 a tracepoint group. First tpoint inside a group can be enabled by 00:07:28.041 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:28.041 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:28.041 in /include/spdk_internal/trace_defs.h 00:07:28.041 00:07:28.041 Other options: 00:07:28.041 -h, --help show this usage 00:07:28.041 -v, --version print SPDK version 00:07:28.041 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:28.041 --env-context Opaque context for use of the env implementation 00:07:28.041 00:07:28.041 Application specific: 00:07:28.041 [--------- DD Options ---------] 00:07:28.041 --if Input file. Must specify either --if or --ib. 00:07:28.041 --ib Input bdev. Must specifier either --if or --ib 00:07:28.041 --of Output file. Must specify either --of or --ob. 00:07:28.041 --ob Output bdev. Must specify either --of or --ob. 00:07:28.041 --iflag Input file flags. 00:07:28.041 --oflag Output file flags. 00:07:28.041 --bs I/O unit size (default: 4096) 00:07:28.041 --qd Queue depth (default: 2) 00:07:28.041 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:28.041 --skip Skip this many I/O units at start of input. (default: 0) 00:07:28.041 --seek Skip this many I/O units at start of output. (default: 0) 00:07:28.041 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:28.041 --sparse Enable hole skipping in input target 00:07:28.041 Available iflag and oflag values: 00:07:28.041 append - append mode 00:07:28.041 direct - use direct I/O for data 00:07:28.041 directory - fail unless a directory 00:07:28.041 dsync - use synchronized I/O for data 00:07:28.041 noatime - do not update access time 00:07:28.041 noctty - do not assign controlling terminal from file 00:07:28.041 nofollow - do not follow symlinks 00:07:28.041 nonblock - use non-blocking I/O 00:07:28.041 sync - use synchronized I/O for data and metadata 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.041 00:07:28.041 real 0m0.076s 00:07:28.041 user 0m0.045s 00:07:28.041 sys 0m0.028s 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:28.041 ************************************ 00:07:28.041 END TEST dd_invalid_arguments 00:07:28.041 ************************************ 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.041 ************************************ 00:07:28.041 START TEST dd_double_input 00:07:28.041 ************************************ 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:28.041 [2024-07-15 19:45:22.191104] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
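The NOT wrapper above only asserts that the wrapped command fails; the same double-input check can be written inline. A minimal sketch, reusing the dump file and binary path from this run (the 2>/dev/null is added here only to keep the sketch quiet):

  # passing both --if and --ib must be rejected before any data is moved
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
       --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2>/dev/null; then
    echo "unexpected success: spdk_dd accepted both --if and --ib" >&2
    exit 1
  fi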
00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.041 00:07:28.041 real 0m0.080s 00:07:28.041 user 0m0.049s 00:07:28.041 sys 0m0.027s 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:28.041 ************************************ 00:07:28.041 END TEST dd_double_input 00:07:28.041 ************************************ 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.041 ************************************ 00:07:28.041 START TEST dd_double_output 00:07:28.041 ************************************ 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.041 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:28.299 [2024-07-15 19:45:22.320499] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.300 ************************************ 00:07:28.300 END TEST dd_double_output 00:07:28.300 ************************************ 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.300 00:07:28.300 real 0m0.083s 00:07:28.300 user 0m0.052s 00:07:28.300 sys 0m0.029s 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.300 ************************************ 00:07:28.300 START TEST dd_no_input 00:07:28.300 ************************************ 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.300 19:45:22 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.300 [2024-07-15 19:45:22.453506] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:28.300 ************************************ 00:07:28.300 END TEST dd_no_input 00:07:28.300 ************************************ 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.300 00:07:28.300 real 0m0.076s 00:07:28.300 user 0m0.048s 00:07:28.300 sys 0m0.026s 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.300 ************************************ 00:07:28.300 START TEST dd_no_output 00:07:28.300 ************************************ 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.300 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.300 19:45:22 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.558 [2024-07-15 19:45:22.581627] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.558 00:07:28.558 real 0m0.078s 00:07:28.558 user 0m0.048s 00:07:28.558 sys 0m0.029s 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.558 ************************************ 00:07:28.558 END TEST dd_no_output 00:07:28.558 ************************************ 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.558 ************************************ 00:07:28.558 START TEST dd_wrong_blocksize 00:07:28.558 ************************************ 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.558 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.559 [2024-07-15 19:45:22.710247] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:28.559 ************************************ 00:07:28.559 END TEST dd_wrong_blocksize 00:07:28.559 ************************************ 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.559 00:07:28.559 real 0m0.076s 00:07:28.559 user 0m0.046s 00:07:28.559 sys 0m0.028s 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.559 ************************************ 00:07:28.559 START TEST dd_smaller_blocksize 00:07:28.559 ************************************ 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.559 19:45:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.817 [2024-07-15 19:45:22.836984] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:28.817 [2024-07-15 19:45:22.837102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64703 ] 00:07:28.817 [2024-07-15 19:45:22.979256] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.077 [2024-07-15 19:45:23.107293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.077 [2024-07-15 19:45:23.165092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.335 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:29.594 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:29.594 [2024-07-15 19:45:23.739387] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:29.594 [2024-07-15 19:45:23.739475] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.853 [2024-07-15 19:45:23.859423] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:29.853 ************************************ 00:07:29.853 END TEST dd_smaller_blocksize 00:07:29.853 ************************************ 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.853 00:07:29.853 real 0m1.187s 00:07:29.853 user 0m0.499s 00:07:29.853 sys 0m0.579s 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.853 19:45:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test 
dd_invalid_count invalid_count 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.853 ************************************ 00:07:29.853 START TEST dd_invalid_count 00:07:29.853 ************************************ 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.853 [2024-07-15 19:45:24.078162] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.853 00:07:29.853 real 0m0.069s 00:07:29.853 user 0m0.039s 00:07:29.853 sys 0m0.029s 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.853 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:29.853 
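The dd_invalid_count run above uses the same pattern as every negative test in negative_dd.sh: the NOT/valid_exec_arg helpers from autotest_common.sh resolve the spdk_dd binary (the repeated "type -t"/"type -P" case statements in the trace), run it with a deliberately bad argument, and then treat a non-zero exit status as the expected outcome. Statuses above 128 are reduced by 128 and then collapsed, which is why es=244 becomes es=116 and finally es=1 in the dd_smaller_blocksize run, while the plain es=22 from the rejected --count value is inverted directly. A minimal sketch of that inversion logic follows; it is a simplification with illustrative names ($SPDK_BIN_DIR, shortened dump paths), not the exact autotest_common.sh source.

    # Simplified sketch of the NOT()/exit-status handling traced above:
    # succeed only when the wrapped spdk_dd invocation fails.
    not_expected() {
        local es=0
        "$@" || es=$?                         # run the command, capture its exit status
        (( es > 128 )) && es=$(( es - 128 ))  # 244 -> 116, as in the dd_smaller_blocksize run
        (( es != 0 )) && es=1                 # collapse any failure code (22, 116, 106, ...) to 1
        (( ! es == 0 ))                       # invert: return success only if the command failed
    }

    # The dd_invalid_count case above is then equivalent to:
    not_expected "$SPDK_BIN_DIR/spdk_dd" --if=dd.dump0 --of=dd.dump1 --count=-9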
************************************ 00:07:29.853 END TEST dd_invalid_count 00:07:29.853 ************************************ 00:07:30.113 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:30.113 19:45:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:30.113 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.113 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.113 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.113 ************************************ 00:07:30.113 START TEST dd_invalid_oflag 00:07:30.114 ************************************ 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:30.114 [2024-07-15 19:45:24.201559] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.114 ************************************ 00:07:30.114 END TEST dd_invalid_oflag 00:07:30.114 ************************************ 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.114 00:07:30.114 real 0m0.076s 00:07:30.114 user 0m0.049s 00:07:30.114 sys 0m0.026s 00:07:30.114 19:45:24 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.114 ************************************ 00:07:30.114 START TEST dd_invalid_iflag 00:07:30.114 ************************************ 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:30.114 [2024-07-15 19:45:24.333676] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.114 ************************************ 00:07:30.114 END TEST dd_invalid_iflag 00:07:30.114 ************************************ 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.114 00:07:30.114 real 0m0.078s 00:07:30.114 user 0m0.049s 
00:07:30.114 sys 0m0.027s 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.114 19:45:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.373 ************************************ 00:07:30.373 START TEST dd_unknown_flag 00:07:30.373 ************************************ 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.373 19:45:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:30.373 [2024-07-15 19:45:24.459064] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:30.373 [2024-07-15 19:45:24.459157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64800 ] 00:07:30.373 [2024-07-15 19:45:24.595060] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.632 [2024-07-15 19:45:24.707122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.633 [2024-07-15 19:45:24.761676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.633 [2024-07-15 19:45:24.797439] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:30.633 [2024-07-15 19:45:24.797487] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.633 [2024-07-15 19:45:24.797537] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:30.633 [2024-07-15 19:45:24.797551] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.633 [2024-07-15 19:45:24.797779] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:30.633 [2024-07-15 19:45:24.797796] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.633 [2024-07-15 19:45:24.797842] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:30.633 [2024-07-15 19:45:24.797852] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:30.892 [2024-07-15 19:45:24.915915] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:30.892 ************************************ 00:07:30.892 END TEST dd_unknown_flag 00:07:30.892 ************************************ 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.892 00:07:30.892 real 0m0.634s 00:07:30.892 user 0m0.384s 00:07:30.892 sys 0m0.158s 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.892 ************************************ 00:07:30.892 START TEST dd_invalid_json 00:07:30.892 ************************************ 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:30.892 19:45:25 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.892 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:31.177 [2024-07-15 19:45:25.145568] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:31.177 [2024-07-15 19:45:25.145660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64829 ] 00:07:31.177 [2024-07-15 19:45:25.282337] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.177 [2024-07-15 19:45:25.406907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.178 [2024-07-15 19:45:25.406975] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:31.178 [2024-07-15 19:45:25.406997] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:31.178 [2024-07-15 19:45:25.407010] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.178 [2024-07-15 19:45:25.407076] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.436 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.437 00:07:31.437 real 0m0.431s 00:07:31.437 user 0m0.239s 00:07:31.437 sys 0m0.088s 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.437 ************************************ 00:07:31.437 END TEST dd_invalid_json 00:07:31.437 ************************************ 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:31.437 ************************************ 00:07:31.437 END TEST spdk_dd_negative 00:07:31.437 ************************************ 00:07:31.437 00:07:31.437 real 0m3.658s 00:07:31.437 user 0m1.760s 00:07:31.437 sys 0m1.529s 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.437 19:45:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:31.437 19:45:25 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:31.437 ************************************ 00:07:31.437 END TEST spdk_dd 00:07:31.437 ************************************ 00:07:31.437 00:07:31.437 real 1m20.275s 00:07:31.437 user 0m52.217s 00:07:31.437 sys 0m34.458s 00:07:31.437 19:45:25 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.437 19:45:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:31.437 19:45:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.437 19:45:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:31.437 19:45:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:31.437 19:45:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:31.437 19:45:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.437 19:45:25 -- common/autotest_common.sh@10 -- # set +x 00:07:31.437 19:45:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:31.696 19:45:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:31.696 19:45:25 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:31.696 19:45:25 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:31.697 19:45:25 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:31.697 19:45:25 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:31.697 19:45:25 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.697 19:45:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.697 19:45:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.697 19:45:25 -- common/autotest_common.sh@10 -- # set +x 00:07:31.697 ************************************ 00:07:31.697 START TEST nvmf_tcp 00:07:31.697 ************************************ 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.697 * Looking for test storage... 00:07:31.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.697 19:45:25 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.697 19:45:25 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.697 19:45:25 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.697 19:45:25 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:31.697 19:45:25 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:31.697 19:45:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.697 19:45:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.697 ************************************ 00:07:31.697 START TEST nvmf_host_management 00:07:31.697 ************************************ 00:07:31.697 
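Before the host_management test body starts, nvmf/common.sh has already fixed the test-wide constants visible above: TCP service ports 4420-4422, NET_TYPE=virt, and a fresh host identity produced by nvme gen-hostnqn, whose trailing UUID doubles as NVME_HOSTID and is stored in the NVME_HOST array for later initiator-side connects. As a hedged illustration of how such an identity is typically consumed (the actual nvme connect calls are not part of this excerpt, and the subsystem NQN and address are taken from the bdevperf configuration printed further down):

    # generate a host NQN and reuse its UUID as the host ID, as common.sh does above
    hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*:}              # keep only the UUID after the last ':'

    # illustrative initiator-side use (not shown in this log excerpt)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$hostnqn" --hostid="$hostid"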
19:45:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.697 * Looking for test storage... 00:07:31.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.697 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.698 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:31.957 Cannot find device "nvmf_init_br" 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:31.957 Cannot find device "nvmf_tgt_br" 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.957 Cannot find device "nvmf_tgt_br2" 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:31.957 Cannot find device "nvmf_init_br" 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:31.957 Cannot find device "nvmf_tgt_br" 00:07:31.957 19:45:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:31.957 19:45:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:31.957 Cannot find device "nvmf_tgt_br2" 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:31.957 Cannot find device "nvmf_br" 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:31.957 Cannot find device "nvmf_init_if" 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.957 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
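The nvmf_veth_init sequence above builds the self-contained topology used for NET_TYPE=virt runs. The "Cannot find device" and "Cannot open network namespace" messages are just the idempotent cleanup of any previous run; after that a fresh nvmf_tgt_ns_spdk namespace is created, three veth pairs are added, the target-side ends are moved into the namespace with 10.0.0.2 and 10.0.0.3 while 10.0.0.1 stays on the host for the initiator, and all links are brought up. The lines that follow attach the bridge-side peers to nvmf_br, open TCP port 4420 in iptables, and verify reachability with ping. Condensed into plain commands (error redirection and the individual "link set ... up" calls omitted), the setup amounts to roughly this:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    # next (below): enslave the *_br peers to nvmf_br, accept TCP/4420 in iptables, ping 10.0.0.1-3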
00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:32.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:07:32.217 00:07:32.217 --- 10.0.0.2 ping statistics --- 00:07:32.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.217 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:32.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:32.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:32.217 00:07:32.217 --- 10.0.0.3 ping statistics --- 00:07:32.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.217 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:32.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:07:32.217 00:07:32.217 --- 10.0.0.1 ping statistics --- 00:07:32.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.217 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65088 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65088 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65088 ']' 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.217 19:45:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.217 [2024-07-15 19:45:26.392492] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:32.217 [2024-07-15 19:45:26.392593] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.515 [2024-07-15 19:45:26.535169] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.515 [2024-07-15 19:45:26.662805] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.515 [2024-07-15 19:45:26.662883] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.515 [2024-07-15 19:45:26.662911] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.515 [2024-07-15 19:45:26.662919] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.515 [2024-07-15 19:45:26.662927] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
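nvmfappstart launches the target inside that namespace with "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E": shared-memory id 0, all tracepoint groups enabled, and core mask 0x1E (binary 11110), which is why the lines below report reactors starting on cores 1-4 while core 0 is left free for the bdevperf initiator that later runs with -c 0x1. waitforlisten 65088 then simply polls the RPC socket until the application answers; a simplified sketch of that wait, assuming it is run from the repo root (this is not the exact autotest_common.sh helper):

    # poll the app's RPC socket until it responds, or give up after ~10 s
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }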
00:07:32.515 [2024-07-15 19:45:26.663140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.515 [2024-07-15 19:45:26.664134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.515 [2024-07-15 19:45:26.664341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.515 [2024-07-15 19:45:26.664342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.515 [2024-07-15 19:45:26.722038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 [2024-07-15 19:45:27.461192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 Malloc0 00:07:33.480 [2024-07-15 19:45:27.539787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
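With the target up, host_management.sh writes its provisioning RPCs into rpcs.txt and replays them (the "cat" in the trace); the file contents are not echoed here, but the surrounding trace pins down what they must do: create the TCP transport with the options from NVMF_TRANSPORT_OPTS, create the 64 MiB / 512-byte-block Malloc0 bdev, and expose it through a cnode0 subsystem listening on 10.0.0.2:4420, which is exactly where the bdevperf configuration printed below connects. A hedged reconstruction using the standalone rpc.py client and the conventional SPDK RPC names (only the first line is shown verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         -a -s SPDKISFASTANDAWESOME                               # allow any host, serial from common.sh
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420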
00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65153 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65153 /var/tmp/bdevperf.sock 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65153 ']' 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:33.480 { 00:07:33.480 "params": { 00:07:33.480 "name": "Nvme$subsystem", 00:07:33.480 "trtype": "$TEST_TRANSPORT", 00:07:33.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.480 "adrfam": "ipv4", 00:07:33.480 "trsvcid": "$NVMF_PORT", 00:07:33.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.480 "hdgst": ${hdgst:-false}, 00:07:33.480 "ddgst": ${ddgst:-false} 00:07:33.480 }, 00:07:33.480 "method": "bdev_nvme_attach_controller" 00:07:33.480 } 00:07:33.480 EOF 00:07:33.480 )") 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:33.480 19:45:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:33.480 "params": { 00:07:33.480 "name": "Nvme0", 00:07:33.480 "trtype": "tcp", 00:07:33.480 "traddr": "10.0.0.2", 00:07:33.480 "adrfam": "ipv4", 00:07:33.480 "trsvcid": "4420", 00:07:33.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:33.480 "hdgst": false, 00:07:33.480 "ddgst": false 00:07:33.480 }, 00:07:33.480 "method": "bdev_nvme_attach_controller" 00:07:33.480 }' 00:07:33.480 [2024-07-15 19:45:27.633547] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:33.480 [2024-07-15 19:45:27.633628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65153 ] 00:07:33.739 [2024-07-15 19:45:27.775442] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.739 [2024-07-15 19:45:27.889413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.739 [2024-07-15 19:45:27.957925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.998 Running I/O for 10 seconds... 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.566 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.567 [2024-07-15 19:45:28.700862] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1229b70 is same with the state(5) to be set 00:07:34.567 [... identical tcp.c:1621 message repeated for tqpair=0x1229b70, timestamps 19:45:28.700912 through 19:45:28.701462 ...] 00:07:34.567 [2024-07-15 19:45:28.701620] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.567 [2024-07-15 19:45:28.701659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.567 [... the same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:59 (lba:114816 through lba:122240), timestamps 19:45:28.701698 through 19:45:28.703036 ...] 00:07:34.569 [2024-07-15 19:45:28.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:07:34.569 [2024-07-15 19:45:28.703056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.569 [2024-07-15 19:45:28.703067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.569 [2024-07-15 19:45:28.703077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.569 [2024-07-15 19:45:28.703088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.569 [2024-07-15 19:45:28.703097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.569 [2024-07-15 19:45:28.703109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.569 [2024-07-15 19:45:28.703118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.569 [2024-07-15 19:45:28.703129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12af1c0 is same with the state(5) to be set 00:07:34.569 [2024-07-15 19:45:28.703200] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12af1c0 was disconnected and freed. reset controller. 00:07:34.569 [2024-07-15 19:45:28.704760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:34.569 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:34.569 00:07:34.569 Latency(us) 00:07:34.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.569 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:34.569 Job: Nvme0n1 ended in about 0.63 seconds with error 00:07:34.569 Verification LBA range: start 0x0 length 0x400 00:07:34.569 Nvme0n1 : 0.63 1423.96 89.00 101.71 0.00 40640.10 3559.80 45279.42 00:07:34.569 =================================================================================================================== 00:07:34.569 Total : 1423.96 89.00 101.71 0.00 40640.10 3559.80 45279.42 00:07:34.569 [2024-07-15 19:45:28.707488] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.569 [2024-07-15 19:45:28.707617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6ef0 (9): Bad file descriptor 00:07:34.569 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.569 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:34.569 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.569 [2024-07-15 19:45:28.710298] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not al 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.569 low host 'nqn.2016-06.io.spdk:host0' 00:07:34.569 [2024-07-15 19:45:28.710612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:34.569 [2024-07-15 19:45:28.710775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.569 [2024-07-15 19:45:28.710937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:34.569 [2024-07-15 19:45:28.711076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:34.569 [2024-07-15 19:45:28.711221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:34.569 [2024-07-15 19:45:28.711369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12a6ef0 00:07:34.569 [2024-07-15 19:45:28.711505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6ef0 (9): Bad file descriptor 00:07:34.569 [2024-07-15 19:45:28.711649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:34.569 [2024-07-15 19:45:28.711773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:34.569 [2024-07-15 19:45:28.711920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:34.569 [2024-07-15 19:45:28.712045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.569 19:45:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.569 19:45:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65153 00:07:35.505 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65153) - No such process 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:35.505 { 00:07:35.505 "params": { 00:07:35.505 "name": "Nvme$subsystem", 00:07:35.505 "trtype": "$TEST_TRANSPORT", 00:07:35.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.505 "adrfam": "ipv4", 00:07:35.505 "trsvcid": "$NVMF_PORT", 00:07:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.505 "hdgst": ${hdgst:-false}, 00:07:35.505 "ddgst": ${ddgst:-false} 00:07:35.505 }, 00:07:35.505 "method": "bdev_nvme_attach_controller" 00:07:35.505 } 00:07:35.505 EOF 00:07:35.505 )") 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@554 -- # cat 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:35.505 19:45:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:35.505 "params": { 00:07:35.505 "name": "Nvme0", 00:07:35.505 "trtype": "tcp", 00:07:35.505 "traddr": "10.0.0.2", 00:07:35.505 "adrfam": "ipv4", 00:07:35.505 "trsvcid": "4420", 00:07:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:35.505 "hdgst": false, 00:07:35.505 "ddgst": false 00:07:35.505 }, 00:07:35.505 "method": "bdev_nvme_attach_controller" 00:07:35.505 }' 00:07:35.764 [2024-07-15 19:45:29.782376] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:35.764 [2024-07-15 19:45:29.782476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65190 ] 00:07:35.764 [2024-07-15 19:45:29.915133] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.023 [2024-07-15 19:45:30.025019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.023 [2024-07-15 19:45:30.087579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.023 Running I/O for 1 seconds... 00:07:37.399 00:07:37.399 Latency(us) 00:07:37.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.399 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:37.399 Verification LBA range: start 0x0 length 0x400 00:07:37.399 Nvme0n1 : 1.04 1538.50 96.16 0.00 0.00 40790.23 4587.52 37653.41 00:07:37.399 =================================================================================================================== 00:07:37.399 Total : 1538.50 96.16 0.00 0.00 40790.23 4587.52 37653.41 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.399 rmmod nvme_tcp 00:07:37.399 rmmod nvme_fabrics 00:07:37.399 rmmod nvme_keyring 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # 
set -e 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65088 ']' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65088 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65088 ']' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65088 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65088 00:07:37.399 killing process with pid 65088 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65088' 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65088 00:07:37.399 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65088 00:07:37.657 [2024-07-15 19:45:31.874532] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:37.917 00:07:37.917 real 0m6.120s 00:07:37.917 user 0m23.650s 00:07:37.917 sys 0m1.541s 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.917 ************************************ 00:07:37.917 END TEST nvmf_host_management 00:07:37.917 ************************************ 00:07:37.917 19:45:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.917 19:45:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:37.917 19:45:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:37.917 19:45:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.917 19:45:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.917 19:45:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:07:37.917 ************************************ 00:07:37.917 START TEST nvmf_lvol 00:07:37.917 ************************************ 00:07:37.917 19:45:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:37.917 * Looking for test storage... 00:07:37.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:37.917 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.917 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:37.917 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.917 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:37.918 Cannot find device "nvmf_tgt_br" 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.918 Cannot find device "nvmf_tgt_br2" 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:37.918 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:38.178 Cannot find device "nvmf_tgt_br" 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:38.178 Cannot find device "nvmf_tgt_br2" 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:07:38.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:38.178 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:38.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:38.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:07:38.438 00:07:38.438 --- 10.0.0.2 ping statistics --- 00:07:38.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.438 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:38.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:38.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:07:38.438 00:07:38.438 --- 10.0.0.3 ping statistics --- 00:07:38.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.438 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:38.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:38.438 00:07:38.438 --- 10.0.0.1 ping statistics --- 00:07:38.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.438 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65405 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65405 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65405 ']' 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.438 19:45:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.439 [2024-07-15 19:45:32.517766] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:38.439 [2024-07-15 19:45:32.517846] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.439 [2024-07-15 19:45:32.651845] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.698 [2024-07-15 19:45:32.764283] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.698 [2024-07-15 19:45:32.764542] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.698 [2024-07-15 19:45:32.764651] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.698 [2024-07-15 19:45:32.764664] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.698 [2024-07-15 19:45:32.764672] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.698 [2024-07-15 19:45:32.765115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.698 [2024-07-15 19:45:32.765330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.698 [2024-07-15 19:45:32.765384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.698 [2024-07-15 19:45:32.821497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.264 19:45:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.264 19:45:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:39.264 19:45:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.264 19:45:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.264 19:45:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.522 19:45:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.522 19:45:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.780 [2024-07-15 19:45:33.788082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.780 19:45:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.038 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:40.039 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.297 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:40.297 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:40.555 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:40.813 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8ad35c5b-6699-4753-a68f-9e0c41324ed0 00:07:40.813 19:45:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8ad35c5b-6699-4753-a68f-9e0c41324ed0 lvol 20 00:07:41.073 19:45:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=4113f2d4-c6c3-4481-ae24-ead85d407ed8 00:07:41.073 19:45:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.337 19:45:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4113f2d4-c6c3-4481-ae24-ead85d407ed8 00:07:41.602 19:45:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.602 [2024-07-15 19:45:35.843130] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.860 19:45:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.860 19:45:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:41.860 19:45:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65482 00:07:41.860 19:45:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:43.236 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4113f2d4-c6c3-4481-ae24-ead85d407ed8 MY_SNAPSHOT 00:07:43.237 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a7569d27-7458-498a-8d14-021717ff2dd0 00:07:43.237 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4113f2d4-c6c3-4481-ae24-ead85d407ed8 30 00:07:43.495 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a7569d27-7458-498a-8d14-021717ff2dd0 MY_CLONE 00:07:43.754 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=056fec5c-c6a7-4226-aee7-4a8666837d1f 00:07:43.754 19:45:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 056fec5c-c6a7-4226-aee7-4a8666837d1f 00:07:44.318 19:45:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65482 00:07:52.457 Initializing NVMe Controllers 00:07:52.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:52.457 Controller IO queue size 128, less than required. 00:07:52.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:52.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:52.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:52.457 Initialization complete. Launching workers. 
00:07:52.457 ======================================================== 00:07:52.457 Latency(us) 00:07:52.457 Device Information : IOPS MiB/s Average min max 00:07:52.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8610.10 33.63 14877.41 3095.16 108544.17 00:07:52.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8326.50 32.53 15378.67 3809.99 107406.33 00:07:52.457 ======================================================== 00:07:52.457 Total : 16936.60 66.16 15123.85 3095.16 108544.17 00:07:52.457 00:07:52.457 19:45:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.716 19:45:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4113f2d4-c6c3-4481-ae24-ead85d407ed8 00:07:52.716 19:45:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ad35c5b-6699-4753-a68f-9e0c41324ed0 00:07:52.975 19:45:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:52.975 19:45:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:52.975 19:45:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:52.975 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.975 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.233 rmmod nvme_tcp 00:07:53.233 rmmod nvme_fabrics 00:07:53.233 rmmod nvme_keyring 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65405 ']' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65405 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65405 ']' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65405 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65405 00:07:53.233 killing process with pid 65405 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65405' 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65405 00:07:53.233 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65405 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
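Stripped of the xtrace noise, the nvmf_lvol run above reduces to the RPC sequence below (a condensed recap of the commands recorded in this trace; $lvs, $lvol, $snapshot and $clone stand in for the run-specific UUIDs such as 8ad35c5b-6699-... and 4113f2d4-... seen above, and sizes are in MiB):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # two 64 MiB malloc bdevs striped into raid0, which backs the lvolstore
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    # export the 20 MiB lvol over NVMe/TCP and drive it with spdk_nvme_perf in the background
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    # while I/O is running: snapshot, grow the live lvol to 30 MiB, clone the snapshot, inflate the clone
    snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait   # let perf finish its 10 s run, then tear everything down
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
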
00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.491 ************************************ 00:07:53.491 END TEST nvmf_lvol 00:07:53.491 ************************************ 00:07:53.491 00:07:53.491 real 0m15.649s 00:07:53.491 user 1m4.948s 00:07:53.491 sys 0m4.208s 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.491 19:45:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.491 19:45:47 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.491 19:45:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.491 19:45:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.491 19:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.491 ************************************ 00:07:53.491 START TEST nvmf_lvs_grow 00:07:53.491 ************************************ 00:07:53.491 19:45:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.750 * Looking for test storage... 
00:07:53.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.750 Cannot find device "nvmf_tgt_br" 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.750 Cannot find device "nvmf_tgt_br2" 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.750 Cannot find device "nvmf_tgt_br" 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.750 Cannot find device "nvmf_tgt_br2" 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.750 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.750 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:54.009 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:54.009 19:45:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:54.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:07:54.009 00:07:54.009 --- 10.0.0.2 ping statistics --- 00:07:54.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.009 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:54.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:54.009 00:07:54.009 --- 10.0.0.3 ping statistics --- 00:07:54.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.009 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:54.009 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:54.009 00:07:54.009 --- 10.0.0.1 ping statistics --- 00:07:54.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.010 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65803 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65803 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65803 ']' 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
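The lvs_grow_clean pass recorded in the remainder of this trace is easier to follow as the condensed sketch below (commands taken from the trace that follows; the 200M/400M file sizes, the 4 MiB cluster size and the 49-to-99 cluster growth are the test's fixed parameters, while the ee59787c-... lvolstore UUID is specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    # 200 MiB file-backed AIO bdev carrying an lvolstore with 4 MiB clusters -> 49 data clusters
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume, exported to bdevperf over NVMe/TCP
    # grow the backing file, let the AIO bdev pick up the new size, then grow the lvolstore under I/O
    truncate -s 400M "$aio"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after
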
00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.010 19:45:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.010 [2024-07-15 19:45:48.185068] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:54.010 [2024-07-15 19:45:48.185166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.268 [2024-07-15 19:45:48.321583] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.268 [2024-07-15 19:45:48.429561] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.268 [2024-07-15 19:45:48.429608] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.268 [2024-07-15 19:45:48.429619] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.268 [2024-07-15 19:45:48.429627] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.268 [2024-07-15 19:45:48.429634] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.268 [2024-07-15 19:45:48.429655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.268 [2024-07-15 19:45:48.486386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.204 19:45:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.462 [2024-07-15 19:45:49.522016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.463 ************************************ 00:07:55.463 START TEST lvs_grow_clean 00:07:55.463 ************************************ 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.463 19:45:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.463 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.721 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.721 19:45:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.979 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ee59787c-87b3-440f-94f6-8307715d550d 00:07:55.979 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:07:55.979 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.237 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.237 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.237 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ee59787c-87b3-440f-94f6-8307715d550d lvol 150 00:07:56.507 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0eb67fb3-237e-422f-9028-eaae11b7e31e 00:07:56.507 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.507 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.782 [2024-07-15 19:45:50.926274] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.782 [2024-07-15 19:45:50.926377] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.782 true 00:07:56.783 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:07:56.783 19:45:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:57.040 19:45:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:57.040 19:45:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.299 19:45:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0eb67fb3-237e-422f-9028-eaae11b7e31e 00:07:57.558 19:45:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.816 [2024-07-15 19:45:51.951072] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.816 19:45:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65891 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65891 /var/tmp/bdevperf.sock 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65891 ']' 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.075 19:45:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:58.075 [2024-07-15 19:45:52.276714] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:07:58.075 [2024-07-15 19:45:52.277217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65891 ] 00:07:58.333 [2024-07-15 19:45:52.413980] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.333 [2024-07-15 19:45:52.539067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.592 [2024-07-15 19:45:52.597707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.160 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.160 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:59.160 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:59.419 Nvme0n1 00:07:59.419 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:59.677 [ 00:07:59.677 { 00:07:59.677 "name": "Nvme0n1", 00:07:59.677 "aliases": [ 00:07:59.677 "0eb67fb3-237e-422f-9028-eaae11b7e31e" 00:07:59.677 ], 00:07:59.677 "product_name": "NVMe disk", 00:07:59.677 "block_size": 4096, 00:07:59.677 "num_blocks": 38912, 00:07:59.677 "uuid": "0eb67fb3-237e-422f-9028-eaae11b7e31e", 00:07:59.677 "assigned_rate_limits": { 00:07:59.677 "rw_ios_per_sec": 0, 00:07:59.677 "rw_mbytes_per_sec": 0, 00:07:59.677 "r_mbytes_per_sec": 0, 00:07:59.677 "w_mbytes_per_sec": 0 00:07:59.677 }, 00:07:59.677 "claimed": false, 00:07:59.677 "zoned": false, 00:07:59.677 "supported_io_types": { 00:07:59.677 "read": true, 00:07:59.677 "write": true, 00:07:59.677 "unmap": true, 00:07:59.677 "flush": true, 00:07:59.677 "reset": true, 00:07:59.677 "nvme_admin": true, 00:07:59.677 "nvme_io": true, 00:07:59.677 "nvme_io_md": false, 00:07:59.677 "write_zeroes": true, 00:07:59.677 "zcopy": false, 00:07:59.677 "get_zone_info": false, 00:07:59.677 "zone_management": false, 00:07:59.677 "zone_append": false, 00:07:59.677 "compare": true, 00:07:59.677 "compare_and_write": true, 00:07:59.677 "abort": true, 00:07:59.677 "seek_hole": false, 00:07:59.677 "seek_data": false, 00:07:59.677 "copy": true, 00:07:59.677 "nvme_iov_md": false 00:07:59.677 }, 00:07:59.677 "memory_domains": [ 00:07:59.677 { 00:07:59.677 "dma_device_id": "system", 00:07:59.677 "dma_device_type": 1 00:07:59.677 } 00:07:59.677 ], 00:07:59.677 "driver_specific": { 00:07:59.677 "nvme": [ 00:07:59.677 { 00:07:59.677 "trid": { 00:07:59.677 "trtype": "TCP", 00:07:59.677 "adrfam": "IPv4", 00:07:59.677 "traddr": "10.0.0.2", 00:07:59.677 "trsvcid": "4420", 00:07:59.677 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:59.677 }, 00:07:59.677 "ctrlr_data": { 00:07:59.677 "cntlid": 1, 00:07:59.677 "vendor_id": "0x8086", 00:07:59.677 "model_number": "SPDK bdev Controller", 00:07:59.677 "serial_number": "SPDK0", 00:07:59.677 "firmware_revision": "24.09", 00:07:59.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.677 "oacs": { 00:07:59.677 "security": 0, 00:07:59.677 "format": 0, 00:07:59.677 "firmware": 0, 00:07:59.677 "ns_manage": 0 00:07:59.677 }, 00:07:59.677 "multi_ctrlr": true, 00:07:59.677 
"ana_reporting": false 00:07:59.677 }, 00:07:59.678 "vs": { 00:07:59.678 "nvme_version": "1.3" 00:07:59.678 }, 00:07:59.678 "ns_data": { 00:07:59.678 "id": 1, 00:07:59.678 "can_share": true 00:07:59.678 } 00:07:59.678 } 00:07:59.678 ], 00:07:59.678 "mp_policy": "active_passive" 00:07:59.678 } 00:07:59.678 } 00:07:59.678 ] 00:07:59.678 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65915 00:07:59.678 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.678 19:45:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.678 Running I/O for 10 seconds... 00:08:01.051 Latency(us) 00:08:01.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.051 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:01.051 =================================================================================================================== 00:08:01.051 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:01.051 00:08:01.619 19:45:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:01.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.877 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:01.877 =================================================================================================================== 00:08:01.877 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:01.877 00:08:01.877 true 00:08:01.877 19:45:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:01.877 19:45:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:02.444 19:45:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.444 19:45:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.444 19:45:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65915 00:08:02.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.702 Nvme0n1 : 3.00 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:08:02.702 =================================================================================================================== 00:08:02.702 Total : 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:08:02.702 00:08:04.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.077 Nvme0n1 : 4.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:04.077 =================================================================================================================== 00:08:04.077 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:04.077 00:08:05.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.013 Nvme0n1 : 5.00 7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:08:05.013 =================================================================================================================== 00:08:05.013 Total : 7543.80 29.47 0.00 0.00 0.00 
0.00 0.00 00:08:05.013 00:08:05.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.952 Nvme0n1 : 6.00 7514.17 29.35 0.00 0.00 0.00 0.00 0.00 00:08:05.952 =================================================================================================================== 00:08:05.952 Total : 7514.17 29.35 0.00 0.00 0.00 0.00 0.00 00:08:05.952 00:08:06.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.887 Nvme0n1 : 7.00 7474.86 29.20 0.00 0.00 0.00 0.00 0.00 00:08:06.887 =================================================================================================================== 00:08:06.887 Total : 7474.86 29.20 0.00 0.00 0.00 0.00 0.00 00:08:06.887 00:08:07.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.822 Nvme0n1 : 8.00 7445.38 29.08 0.00 0.00 0.00 0.00 0.00 00:08:07.822 =================================================================================================================== 00:08:07.822 Total : 7445.38 29.08 0.00 0.00 0.00 0.00 0.00 00:08:07.822 00:08:08.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.759 Nvme0n1 : 9.00 7436.56 29.05 0.00 0.00 0.00 0.00 0.00 00:08:08.759 =================================================================================================================== 00:08:08.759 Total : 7436.56 29.05 0.00 0.00 0.00 0.00 0.00 00:08:08.759 00:08:09.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.693 Nvme0n1 : 10.00 7416.80 28.97 0.00 0.00 0.00 0.00 0.00 00:08:09.693 =================================================================================================================== 00:08:09.693 Total : 7416.80 28.97 0.00 0.00 0.00 0.00 0.00 00:08:09.693 00:08:09.694 00:08:09.694 Latency(us) 00:08:09.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.694 Nvme0n1 : 10.01 7420.18 28.99 0.00 0.00 17244.65 14358.34 37891.72 00:08:09.694 =================================================================================================================== 00:08:09.694 Total : 7420.18 28.99 0.00 0.00 17244.65 14358.34 37891.72 00:08:09.694 0 00:08:09.694 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65891 00:08:09.694 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65891 ']' 00:08:09.694 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65891 00:08:09.694 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:09.694 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65891 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:09.952 killing process with pid 65891 00:08:09.952 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.952 00:08:09.952 Latency(us) 00:08:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.952 
=================================================================================================================== 00:08:09.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65891' 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65891 00:08:09.952 19:46:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65891 00:08:09.952 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.212 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.779 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.779 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:10.779 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.779 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:10.779 19:46:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.037 [2024-07-15 19:46:05.253918] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:11.294 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:11.552 request: 00:08:11.552 { 00:08:11.552 "uuid": "ee59787c-87b3-440f-94f6-8307715d550d", 00:08:11.552 "method": "bdev_lvol_get_lvstores", 00:08:11.552 "req_id": 1 00:08:11.552 } 00:08:11.552 Got JSON-RPC error response 00:08:11.552 response: 00:08:11.552 { 00:08:11.552 "code": -19, 00:08:11.552 "message": "No such device" 00:08:11.552 } 00:08:11.552 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:11.552 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.552 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.552 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.552 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.810 aio_bdev 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0eb67fb3-237e-422f-9028-eaae11b7e31e 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0eb67fb3-237e-422f-9028-eaae11b7e31e 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:11.810 19:46:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.068 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0eb67fb3-237e-422f-9028-eaae11b7e31e -t 2000 00:08:12.327 [ 00:08:12.327 { 00:08:12.327 "name": "0eb67fb3-237e-422f-9028-eaae11b7e31e", 00:08:12.327 "aliases": [ 00:08:12.327 "lvs/lvol" 00:08:12.327 ], 00:08:12.327 "product_name": "Logical Volume", 00:08:12.327 "block_size": 4096, 00:08:12.327 "num_blocks": 38912, 00:08:12.327 "uuid": "0eb67fb3-237e-422f-9028-eaae11b7e31e", 00:08:12.327 "assigned_rate_limits": { 00:08:12.327 "rw_ios_per_sec": 0, 00:08:12.327 "rw_mbytes_per_sec": 0, 00:08:12.327 "r_mbytes_per_sec": 0, 00:08:12.327 "w_mbytes_per_sec": 0 00:08:12.327 }, 00:08:12.327 "claimed": false, 00:08:12.327 "zoned": false, 00:08:12.327 "supported_io_types": { 00:08:12.327 "read": true, 00:08:12.327 "write": true, 00:08:12.327 "unmap": true, 00:08:12.327 "flush": false, 00:08:12.327 "reset": true, 00:08:12.327 "nvme_admin": false, 00:08:12.327 "nvme_io": false, 00:08:12.327 "nvme_io_md": false, 00:08:12.327 "write_zeroes": true, 00:08:12.327 "zcopy": false, 00:08:12.327 "get_zone_info": false, 00:08:12.327 "zone_management": false, 00:08:12.327 "zone_append": false, 00:08:12.327 "compare": false, 00:08:12.327 "compare_and_write": false, 00:08:12.327 "abort": false, 00:08:12.327 "seek_hole": true, 00:08:12.327 "seek_data": true, 00:08:12.327 "copy": false, 00:08:12.327 "nvme_iov_md": false 00:08:12.327 }, 00:08:12.327 
"driver_specific": { 00:08:12.327 "lvol": { 00:08:12.327 "lvol_store_uuid": "ee59787c-87b3-440f-94f6-8307715d550d", 00:08:12.327 "base_bdev": "aio_bdev", 00:08:12.327 "thin_provision": false, 00:08:12.327 "num_allocated_clusters": 38, 00:08:12.327 "snapshot": false, 00:08:12.327 "clone": false, 00:08:12.327 "esnap_clone": false 00:08:12.327 } 00:08:12.327 } 00:08:12.327 } 00:08:12.327 ] 00:08:12.327 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:12.328 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:12.328 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.587 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.587 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:12.587 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.846 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.846 19:46:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0eb67fb3-237e-422f-9028-eaae11b7e31e 00:08:13.106 19:46:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee59787c-87b3-440f-94f6-8307715d550d 00:08:13.364 19:46:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.625 19:46:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.884 ************************************ 00:08:13.884 END TEST lvs_grow_clean 00:08:13.884 ************************************ 00:08:13.884 00:08:13.884 real 0m18.521s 00:08:13.884 user 0m17.362s 00:08:13.884 sys 0m2.569s 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.884 19:46:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.142 ************************************ 00:08:14.142 START TEST lvs_grow_dirty 00:08:14.142 ************************************ 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.142 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.143 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.143 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.401 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.401 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.660 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:14.660 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:14.660 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:14.919 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:14.919 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:14.919 19:46:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7861c2e4-46c3-4fac-8720-17568e9c813f lvol 150 00:08:15.178 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c61887ac-7118-44fd-821c-33c1598130c5 00:08:15.178 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.178 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.436 [2024-07-15 19:46:09.509806] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.436 [2024-07-15 19:46:09.509954] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.436 true 00:08:15.437 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:15.437 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.695 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.695 19:46:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.954 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c61887ac-7118-44fd-821c-33c1598130c5 00:08:16.212 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:16.472 [2024-07-15 19:46:10.514424] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.472 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66166 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66166 /var/tmp/bdevperf.sock 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66166 ']' 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.731 19:46:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.731 [2024-07-15 19:46:10.833793] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
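A condensed sketch of the bdevperf wiring this trace performs, with the socket path, flags, target address, and NQN copied from the log above and below (illustrative only, not part of the captured output):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attached controller then shows up as Nvme0n1 in bdev_get_bdevs, which is the device the randwrite job below runs against.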
00:08:16.731 [2024-07-15 19:46:10.833888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66166 ] 00:08:16.989 [2024-07-15 19:46:10.976306] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.989 [2024-07-15 19:46:11.099140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.989 [2024-07-15 19:46:11.160717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.925 19:46:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.925 19:46:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:17.925 19:46:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.925 Nvme0n1 00:08:17.925 19:46:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.182 [ 00:08:18.182 { 00:08:18.182 "name": "Nvme0n1", 00:08:18.182 "aliases": [ 00:08:18.182 "c61887ac-7118-44fd-821c-33c1598130c5" 00:08:18.182 ], 00:08:18.182 "product_name": "NVMe disk", 00:08:18.182 "block_size": 4096, 00:08:18.182 "num_blocks": 38912, 00:08:18.182 "uuid": "c61887ac-7118-44fd-821c-33c1598130c5", 00:08:18.182 "assigned_rate_limits": { 00:08:18.182 "rw_ios_per_sec": 0, 00:08:18.182 "rw_mbytes_per_sec": 0, 00:08:18.182 "r_mbytes_per_sec": 0, 00:08:18.182 "w_mbytes_per_sec": 0 00:08:18.182 }, 00:08:18.182 "claimed": false, 00:08:18.182 "zoned": false, 00:08:18.182 "supported_io_types": { 00:08:18.182 "read": true, 00:08:18.182 "write": true, 00:08:18.182 "unmap": true, 00:08:18.182 "flush": true, 00:08:18.182 "reset": true, 00:08:18.182 "nvme_admin": true, 00:08:18.182 "nvme_io": true, 00:08:18.182 "nvme_io_md": false, 00:08:18.182 "write_zeroes": true, 00:08:18.182 "zcopy": false, 00:08:18.182 "get_zone_info": false, 00:08:18.182 "zone_management": false, 00:08:18.182 "zone_append": false, 00:08:18.182 "compare": true, 00:08:18.182 "compare_and_write": true, 00:08:18.182 "abort": true, 00:08:18.182 "seek_hole": false, 00:08:18.182 "seek_data": false, 00:08:18.182 "copy": true, 00:08:18.182 "nvme_iov_md": false 00:08:18.182 }, 00:08:18.182 "memory_domains": [ 00:08:18.182 { 00:08:18.182 "dma_device_id": "system", 00:08:18.182 "dma_device_type": 1 00:08:18.182 } 00:08:18.182 ], 00:08:18.182 "driver_specific": { 00:08:18.182 "nvme": [ 00:08:18.182 { 00:08:18.182 "trid": { 00:08:18.182 "trtype": "TCP", 00:08:18.182 "adrfam": "IPv4", 00:08:18.182 "traddr": "10.0.0.2", 00:08:18.182 "trsvcid": "4420", 00:08:18.182 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.182 }, 00:08:18.182 "ctrlr_data": { 00:08:18.182 "cntlid": 1, 00:08:18.182 "vendor_id": "0x8086", 00:08:18.182 "model_number": "SPDK bdev Controller", 00:08:18.182 "serial_number": "SPDK0", 00:08:18.182 "firmware_revision": "24.09", 00:08:18.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.182 "oacs": { 00:08:18.182 "security": 0, 00:08:18.182 "format": 0, 00:08:18.182 "firmware": 0, 00:08:18.182 "ns_manage": 0 00:08:18.182 }, 00:08:18.182 "multi_ctrlr": true, 00:08:18.182 
"ana_reporting": false 00:08:18.182 }, 00:08:18.182 "vs": { 00:08:18.182 "nvme_version": "1.3" 00:08:18.182 }, 00:08:18.182 "ns_data": { 00:08:18.182 "id": 1, 00:08:18.182 "can_share": true 00:08:18.182 } 00:08:18.182 } 00:08:18.182 ], 00:08:18.182 "mp_policy": "active_passive" 00:08:18.182 } 00:08:18.182 } 00:08:18.182 ] 00:08:18.182 19:46:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66184 00:08:18.183 19:46:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.183 19:46:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.441 Running I/O for 10 seconds... 00:08:19.375 Latency(us) 00:08:19.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.375 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:19.375 =================================================================================================================== 00:08:19.375 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:19.375 00:08:20.309 19:46:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:20.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.309 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:20.309 =================================================================================================================== 00:08:20.309 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:20.309 00:08:20.567 true 00:08:20.567 19:46:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:20.567 19:46:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:20.825 19:46:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.825 19:46:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.825 19:46:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66184 00:08:21.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.392 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:21.392 =================================================================================================================== 00:08:21.392 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:21.392 00:08:22.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.327 Nvme0n1 : 4.00 7715.25 30.14 0.00 0.00 0.00 0.00 0.00 00:08:22.327 =================================================================================================================== 00:08:22.327 Total : 7715.25 30.14 0.00 0.00 0.00 0.00 0.00 00:08:22.327 00:08:23.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.701 Nvme0n1 : 5.00 7670.80 29.96 0.00 0.00 0.00 0.00 0.00 00:08:23.701 =================================================================================================================== 00:08:23.701 Total : 7670.80 29.96 0.00 0.00 0.00 
0.00 0.00 00:08:23.701 00:08:24.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.637 Nvme0n1 : 6.00 7369.83 28.79 0.00 0.00 0.00 0.00 0.00 00:08:24.637 =================================================================================================================== 00:08:24.637 Total : 7369.83 28.79 0.00 0.00 0.00 0.00 0.00 00:08:24.637 00:08:25.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.590 Nvme0n1 : 7.00 7351.14 28.72 0.00 0.00 0.00 0.00 0.00 00:08:25.590 =================================================================================================================== 00:08:25.590 Total : 7351.14 28.72 0.00 0.00 0.00 0.00 0.00 00:08:25.590 00:08:26.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.562 Nvme0n1 : 8.00 7321.25 28.60 0.00 0.00 0.00 0.00 0.00 00:08:26.562 =================================================================================================================== 00:08:26.562 Total : 7321.25 28.60 0.00 0.00 0.00 0.00 0.00 00:08:26.562 00:08:27.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.498 Nvme0n1 : 9.00 7319.22 28.59 0.00 0.00 0.00 0.00 0.00 00:08:27.498 =================================================================================================================== 00:08:27.498 Total : 7319.22 28.59 0.00 0.00 0.00 0.00 0.00 00:08:27.498 00:08:28.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.434 Nvme0n1 : 10.00 7323.90 28.61 0.00 0.00 0.00 0.00 0.00 00:08:28.434 =================================================================================================================== 00:08:28.434 Total : 7323.90 28.61 0.00 0.00 0.00 0.00 0.00 00:08:28.434 00:08:28.434 00:08:28.434 Latency(us) 00:08:28.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.434 Nvme0n1 : 10.02 7321.35 28.60 0.00 0.00 17476.43 4289.63 257377.75 00:08:28.434 =================================================================================================================== 00:08:28.434 Total : 7321.35 28.60 0.00 0.00 17476.43 4289.63 257377.75 00:08:28.434 0 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66166 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66166 ']' 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66166 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66166 00:08:28.434 killing process with pid 66166 00:08:28.434 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.434 00:08:28.434 Latency(us) 00:08:28.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.434 =================================================================================================================== 00:08:28.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66166' 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66166 00:08:28.434 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66166 00:08:28.692 19:46:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.950 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.209 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.209 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:29.467 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65803 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65803 00:08:29.468 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65803 Killed "${NVMF_APP[@]}" "$@" 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.468 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66322 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66322 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66322 ']' 00:08:29.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
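The dirty branch deliberately leaves the lvstore open: the first target (pid 65803) is removed with kill -9 while the lvstore is still loaded, a new nvmf_tgt is started, and the same backing file is re-registered so that blobstore recovery runs on load (see the bs_recover notices further below). A minimal sketch of that re-registration step, using the paths and lvstore UUID taken from this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f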
00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.725 19:46:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.725 [2024-07-15 19:46:23.767377] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:29.725 [2024-07-15 19:46:23.767455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.725 [2024-07-15 19:46:23.905915] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.982 [2024-07-15 19:46:24.021525] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.982 [2024-07-15 19:46:24.021720] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.982 [2024-07-15 19:46:24.021740] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.982 [2024-07-15 19:46:24.021749] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.982 [2024-07-15 19:46:24.021757] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.982 [2024-07-15 19:46:24.021784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.982 [2024-07-15 19:46:24.077304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.547 19:46:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.860 [2024-07-15 19:46:24.984514] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:30.860 [2024-07-15 19:46:24.985020] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:30.860 [2024-07-15 19:46:24.985344] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:30.860 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c61887ac-7118-44fd-821c-33c1598130c5 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c61887ac-7118-44fd-821c-33c1598130c5 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
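Once recovery completes, the checks below reduce to reading the lvstore counters with jq and comparing them against the pre-kill values; a sketch with the UUID from this log (variable names are illustrative):

  free=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f | jq -r '.[0].free_clusters')
  total=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f | jq -r '.[0].total_data_clusters')
  (( free == 61 )) && (( total == 99 ))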
00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:30.861 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.118 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c61887ac-7118-44fd-821c-33c1598130c5 -t 2000 00:08:31.376 [ 00:08:31.376 { 00:08:31.376 "name": "c61887ac-7118-44fd-821c-33c1598130c5", 00:08:31.376 "aliases": [ 00:08:31.376 "lvs/lvol" 00:08:31.376 ], 00:08:31.376 "product_name": "Logical Volume", 00:08:31.376 "block_size": 4096, 00:08:31.376 "num_blocks": 38912, 00:08:31.376 "uuid": "c61887ac-7118-44fd-821c-33c1598130c5", 00:08:31.376 "assigned_rate_limits": { 00:08:31.376 "rw_ios_per_sec": 0, 00:08:31.376 "rw_mbytes_per_sec": 0, 00:08:31.376 "r_mbytes_per_sec": 0, 00:08:31.376 "w_mbytes_per_sec": 0 00:08:31.376 }, 00:08:31.376 "claimed": false, 00:08:31.376 "zoned": false, 00:08:31.376 "supported_io_types": { 00:08:31.376 "read": true, 00:08:31.376 "write": true, 00:08:31.376 "unmap": true, 00:08:31.376 "flush": false, 00:08:31.376 "reset": true, 00:08:31.376 "nvme_admin": false, 00:08:31.376 "nvme_io": false, 00:08:31.376 "nvme_io_md": false, 00:08:31.376 "write_zeroes": true, 00:08:31.376 "zcopy": false, 00:08:31.376 "get_zone_info": false, 00:08:31.376 "zone_management": false, 00:08:31.376 "zone_append": false, 00:08:31.376 "compare": false, 00:08:31.376 "compare_and_write": false, 00:08:31.376 "abort": false, 00:08:31.376 "seek_hole": true, 00:08:31.376 "seek_data": true, 00:08:31.376 "copy": false, 00:08:31.376 "nvme_iov_md": false 00:08:31.376 }, 00:08:31.376 "driver_specific": { 00:08:31.376 "lvol": { 00:08:31.376 "lvol_store_uuid": "7861c2e4-46c3-4fac-8720-17568e9c813f", 00:08:31.376 "base_bdev": "aio_bdev", 00:08:31.376 "thin_provision": false, 00:08:31.376 "num_allocated_clusters": 38, 00:08:31.376 "snapshot": false, 00:08:31.376 "clone": false, 00:08:31.376 "esnap_clone": false 00:08:31.376 } 00:08:31.376 } 00:08:31.376 } 00:08:31.376 ] 00:08:31.376 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:31.376 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:31.376 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:31.634 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:31.634 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:31.634 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:31.892 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:31.892 19:46:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.149 [2024-07-15 19:46:26.198137] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:32.149 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:32.407 request: 00:08:32.407 { 00:08:32.407 "uuid": "7861c2e4-46c3-4fac-8720-17568e9c813f", 00:08:32.407 "method": "bdev_lvol_get_lvstores", 00:08:32.407 "req_id": 1 00:08:32.407 } 00:08:32.407 Got JSON-RPC error response 00:08:32.407 response: 00:08:32.407 { 00:08:32.407 "code": -19, 00:08:32.407 "message": "No such device" 00:08:32.407 } 00:08:32.407 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:32.407 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.407 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:32.407 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.407 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.664 aio_bdev 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c61887ac-7118-44fd-821c-33c1598130c5 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c61887ac-7118-44fd-821c-33c1598130c5 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:32.664 19:46:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.923 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c61887ac-7118-44fd-821c-33c1598130c5 -t 2000 00:08:33.182 [ 00:08:33.182 { 00:08:33.182 "name": "c61887ac-7118-44fd-821c-33c1598130c5", 00:08:33.182 "aliases": [ 00:08:33.182 "lvs/lvol" 00:08:33.182 ], 00:08:33.182 "product_name": "Logical Volume", 00:08:33.182 "block_size": 4096, 00:08:33.182 "num_blocks": 38912, 00:08:33.182 "uuid": "c61887ac-7118-44fd-821c-33c1598130c5", 00:08:33.182 "assigned_rate_limits": { 00:08:33.182 "rw_ios_per_sec": 0, 00:08:33.182 "rw_mbytes_per_sec": 0, 00:08:33.182 "r_mbytes_per_sec": 0, 00:08:33.182 "w_mbytes_per_sec": 0 00:08:33.182 }, 00:08:33.182 "claimed": false, 00:08:33.182 "zoned": false, 00:08:33.182 "supported_io_types": { 00:08:33.182 "read": true, 00:08:33.182 "write": true, 00:08:33.182 "unmap": true, 00:08:33.182 "flush": false, 00:08:33.182 "reset": true, 00:08:33.182 "nvme_admin": false, 00:08:33.182 "nvme_io": false, 00:08:33.182 "nvme_io_md": false, 00:08:33.182 "write_zeroes": true, 00:08:33.182 "zcopy": false, 00:08:33.182 "get_zone_info": false, 00:08:33.182 "zone_management": false, 00:08:33.182 "zone_append": false, 00:08:33.182 "compare": false, 00:08:33.182 "compare_and_write": false, 00:08:33.182 "abort": false, 00:08:33.182 "seek_hole": true, 00:08:33.182 "seek_data": true, 00:08:33.182 "copy": false, 00:08:33.182 "nvme_iov_md": false 00:08:33.182 }, 00:08:33.182 "driver_specific": { 00:08:33.182 "lvol": { 00:08:33.182 "lvol_store_uuid": "7861c2e4-46c3-4fac-8720-17568e9c813f", 00:08:33.182 "base_bdev": "aio_bdev", 00:08:33.182 "thin_provision": false, 00:08:33.182 "num_allocated_clusters": 38, 00:08:33.182 "snapshot": false, 00:08:33.182 "clone": false, 00:08:33.182 "esnap_clone": false 00:08:33.182 } 00:08:33.182 } 00:08:33.182 } 00:08:33.182 ] 00:08:33.182 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:33.182 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:33.182 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.441 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.441 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:33.441 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.699 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.699 19:46:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c61887ac-7118-44fd-821c-33c1598130c5 00:08:33.957 19:46:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 7861c2e4-46c3-4fac-8720-17568e9c813f 00:08:34.216 19:46:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.474 19:46:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:34.732 00:08:34.732 real 0m20.761s 00:08:34.732 user 0m44.002s 00:08:34.732 sys 0m8.298s 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.732 ************************************ 00:08:34.732 END TEST lvs_grow_dirty 00:08:34.732 ************************************ 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:34.732 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:34.732 nvmf_trace.0 00:08:34.991 19:46:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:34.991 19:46:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:34.991 19:46:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.991 19:46:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.991 rmmod nvme_tcp 00:08:34.991 rmmod nvme_fabrics 00:08:34.991 rmmod nvme_keyring 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66322 ']' 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66322 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66322 ']' 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66322 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66322 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66322' 00:08:34.991 killing process with pid 66322 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66322 00:08:34.991 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66322 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.250 00:08:35.250 real 0m41.680s 00:08:35.250 user 1m7.593s 00:08:35.250 sys 0m11.540s 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.250 19:46:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.250 ************************************ 00:08:35.250 END TEST nvmf_lvs_grow 00:08:35.250 ************************************ 00:08:35.250 19:46:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.250 19:46:29 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.250 19:46:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.250 19:46:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.250 19:46:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.250 ************************************ 00:08:35.250 START TEST nvmf_bdev_io_wait 00:08:35.250 ************************************ 00:08:35.250 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.509 * Looking for test storage... 
00:08:35.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.509 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.510 Cannot find device "nvmf_tgt_br" 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.510 Cannot find device "nvmf_tgt_br2" 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.510 Cannot find device "nvmf_tgt_br" 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.510 Cannot find device "nvmf_tgt_br2" 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
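nvmf_veth_init first tears down any leftovers (the "Cannot find device" and "Cannot open network namespace" notices around this point are the expected result of deleting interfaces that do not yet exist), then builds a small bridged topology, as the commands below show. Condensed, the layout is one initiator-side veth (10.0.0.1) and two target-side veths (10.0.0.2, 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br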
00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.510 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:35.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:35.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:08:35.769 00:08:35.769 --- 10.0.0.2 ping statistics --- 00:08:35.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.769 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:35.769 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.769 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:35.769 00:08:35.769 --- 10.0.0.3 ping statistics --- 00:08:35.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.769 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:35.769 00:08:35.769 --- 10.0.0.1 ping statistics --- 00:08:35.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.769 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66629 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66629 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66629 ']' 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
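Condensed, this is the topology the three pings above just verified: the initiator side (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and all veth peers are joined through the nvmf_br bridge. The commands below are lifted from the nvmf_veth_init trace; the "ip link set ... up" steps and the FORWARD rule are omitted for brevity, and everything must run as root:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT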
00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.769 19:46:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.769 [2024-07-15 19:46:29.954637] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:35.769 [2024-07-15 19:46:29.954737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.027 [2024-07-15 19:46:30.090160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.027 [2024-07-15 19:46:30.203453] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.027 [2024-07-15 19:46:30.203507] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.027 [2024-07-15 19:46:30.203519] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.027 [2024-07-15 19:46:30.203528] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.027 [2024-07-15 19:46:30.203535] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.027 [2024-07-15 19:46:30.203711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.027 [2024-07-15 19:46:30.203817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.027 [2024-07-15 19:46:30.204139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.027 [2024-07-15 19:46:30.204182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 [2024-07-15 19:46:31.049110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.965 
19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 [2024-07-15 19:46:31.061595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 Malloc0 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.965 [2024-07-15 19:46:31.131066] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66669 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66671 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:36.965 19:46:31 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.965 { 00:08:36.965 "params": { 00:08:36.965 "name": "Nvme$subsystem", 00:08:36.965 "trtype": "$TEST_TRANSPORT", 00:08:36.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.965 "adrfam": "ipv4", 00:08:36.965 "trsvcid": "$NVMF_PORT", 00:08:36.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.965 "hdgst": ${hdgst:-false}, 00:08:36.965 "ddgst": ${ddgst:-false} 00:08:36.965 }, 00:08:36.965 "method": "bdev_nvme_attach_controller" 00:08:36.965 } 00:08:36.965 EOF 00:08:36.965 )") 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66673 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.965 { 00:08:36.965 "params": { 00:08:36.965 "name": "Nvme$subsystem", 00:08:36.965 "trtype": "$TEST_TRANSPORT", 00:08:36.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.965 "adrfam": "ipv4", 00:08:36.965 "trsvcid": "$NVMF_PORT", 00:08:36.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.965 "hdgst": ${hdgst:-false}, 00:08:36.965 "ddgst": ${ddgst:-false} 00:08:36.965 }, 00:08:36.965 "method": "bdev_nvme_attach_controller" 00:08:36.965 } 00:08:36.965 EOF 00:08:36.965 )") 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66675 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.965 { 00:08:36.965 "params": { 00:08:36.965 "name": "Nvme$subsystem", 00:08:36.965 "trtype": "$TEST_TRANSPORT", 00:08:36.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.965 "adrfam": "ipv4", 00:08:36.965 "trsvcid": "$NVMF_PORT", 00:08:36.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.965 "hdgst": ${hdgst:-false}, 00:08:36.965 "ddgst": ${ddgst:-false} 00:08:36.965 }, 00:08:36.965 "method": "bdev_nvme_attach_controller" 00:08:36.965 } 00:08:36.965 EOF 00:08:36.965 )") 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # config=() 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.965 { 00:08:36.965 "params": { 00:08:36.965 "name": "Nvme$subsystem", 00:08:36.965 "trtype": "$TEST_TRANSPORT", 00:08:36.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.965 "adrfam": "ipv4", 00:08:36.965 "trsvcid": "$NVMF_PORT", 00:08:36.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.965 "hdgst": ${hdgst:-false}, 00:08:36.965 "ddgst": ${ddgst:-false} 00:08:36.965 }, 00:08:36.965 "method": "bdev_nvme_attach_controller" 00:08:36.965 } 00:08:36.965 EOF 00:08:36.965 )") 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:36.965 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.966 "params": { 00:08:36.966 "name": "Nvme1", 00:08:36.966 "trtype": "tcp", 00:08:36.966 "traddr": "10.0.0.2", 00:08:36.966 "adrfam": "ipv4", 00:08:36.966 "trsvcid": "4420", 00:08:36.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.966 "hdgst": false, 00:08:36.966 "ddgst": false 00:08:36.966 }, 00:08:36.966 "method": "bdev_nvme_attach_controller" 00:08:36.966 }' 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.966 "params": { 00:08:36.966 "name": "Nvme1", 00:08:36.966 "trtype": "tcp", 00:08:36.966 "traddr": "10.0.0.2", 00:08:36.966 "adrfam": "ipv4", 00:08:36.966 "trsvcid": "4420", 00:08:36.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.966 "hdgst": false, 00:08:36.966 "ddgst": false 00:08:36.966 }, 00:08:36.966 "method": "bdev_nvme_attach_controller" 00:08:36.966 }' 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.966 "params": { 00:08:36.966 "name": "Nvme1", 00:08:36.966 "trtype": "tcp", 00:08:36.966 "traddr": "10.0.0.2", 00:08:36.966 "adrfam": "ipv4", 00:08:36.966 "trsvcid": "4420", 00:08:36.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.966 "hdgst": false, 00:08:36.966 "ddgst": false 00:08:36.966 }, 00:08:36.966 "method": "bdev_nvme_attach_controller" 00:08:36.966 }' 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.966 "params": { 00:08:36.966 "name": "Nvme1", 00:08:36.966 "trtype": "tcp", 00:08:36.966 "traddr": "10.0.0.2", 00:08:36.966 "adrfam": "ipv4", 00:08:36.966 "trsvcid": "4420", 
00:08:36.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.966 "hdgst": false, 00:08:36.966 "ddgst": false 00:08:36.966 }, 00:08:36.966 "method": "bdev_nvme_attach_controller" 00:08:36.966 }' 00:08:36.966 [2024-07-15 19:46:31.185288] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:36.966 [2024-07-15 19:46:31.185362] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:36.966 [2024-07-15 19:46:31.185869] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:36.966 [2024-07-15 19:46:31.185921] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:36.966 19:46:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66669 00:08:36.966 [2024-07-15 19:46:31.205254] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:36.966 [2024-07-15 19:46:31.205329] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:37.224 [2024-07-15 19:46:31.220405] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:37.224 [2024-07-15 19:46:31.220953] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:37.224 [2024-07-15 19:46:31.388528] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.224 [2024-07-15 19:46:31.453814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.481 [2024-07-15 19:46:31.507722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:37.481 [2024-07-15 19:46:31.544617] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.481 [2024-07-15 19:46:31.557581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.481 [2024-07-15 19:46:31.571472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:37.481 [2024-07-15 19:46:31.611523] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.481 [2024-07-15 19:46:31.619140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.481 [2024-07-15 19:46:31.645350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:37.481 Running I/O for 1 seconds... 00:08:37.481 [2024-07-15 19:46:31.694239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.481 [2024-07-15 19:46:31.700818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.481 Running I/O for 1 seconds... 00:08:37.739 [2024-07-15 19:46:31.746979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.739 Running I/O for 1 seconds... 00:08:37.739 Running I/O for 1 seconds... 
00:08:38.674 00:08:38.674 Latency(us) 00:08:38.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.674 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:38.674 Nvme1n1 : 1.00 174454.21 681.46 0.00 0.00 730.96 366.78 1057.51 00:08:38.674 =================================================================================================================== 00:08:38.674 Total : 174454.21 681.46 0.00 0.00 730.96 366.78 1057.51 00:08:38.674 00:08:38.674 Latency(us) 00:08:38.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.674 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:38.674 Nvme1n1 : 1.01 10592.36 41.38 0.00 0.00 12034.35 7179.17 20852.36 00:08:38.674 =================================================================================================================== 00:08:38.674 Total : 10592.36 41.38 0.00 0.00 12034.35 7179.17 20852.36 00:08:38.674 00:08:38.674 Latency(us) 00:08:38.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.674 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:38.674 Nvme1n1 : 1.01 7687.99 30.03 0.00 0.00 16548.66 10783.65 26452.71 00:08:38.674 =================================================================================================================== 00:08:38.674 Total : 7687.99 30.03 0.00 0.00 16548.66 10783.65 26452.71 00:08:38.674 00:08:38.674 Latency(us) 00:08:38.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.674 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:38.674 Nvme1n1 : 1.01 7936.11 31.00 0.00 0.00 16054.66 5957.82 27405.96 00:08:38.674 =================================================================================================================== 00:08:38.674 Total : 7936.11 31.00 0.00 0.00 16054.66 5957.82 27405.96 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66671 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66673 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66675 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.932 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.932 rmmod nvme_tcp 00:08:38.932 rmmod nvme_fabrics 00:08:38.932 rmmod nvme_keyring 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66629 ']' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66629 ']' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.191 killing process with pid 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66629' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66629 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.191 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.449 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:39.449 00:08:39.449 real 0m4.041s 00:08:39.449 user 0m17.529s 00:08:39.449 sys 0m2.267s 00:08:39.449 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.449 19:46:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.449 ************************************ 00:08:39.449 END TEST nvmf_bdev_io_wait 00:08:39.449 ************************************ 00:08:39.449 19:46:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:39.449 19:46:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.449 19:46:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.449 19:46:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.449 19:46:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.449 ************************************ 00:08:39.449 START TEST nvmf_queue_depth 00:08:39.449 ************************************ 00:08:39.449 19:46:33 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.449 * Looking for test storage... 00:08:39.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.449 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:39.450 Cannot find device "nvmf_tgt_br" 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.450 Cannot find device "nvmf_tgt_br2" 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:39.450 Cannot find device "nvmf_tgt_br" 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:39.450 Cannot find device "nvmf_tgt_br2" 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:39.450 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:39.708 19:46:33 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.708 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:39.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:39.967 00:08:39.967 --- 10.0.0.2 ping statistics --- 00:08:39.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.967 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:39.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:39.967 00:08:39.967 --- 10.0.0.3 ping statistics --- 00:08:39.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.967 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:39.967 00:08:39.967 --- 10.0.0.1 ping statistics --- 00:08:39.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.967 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.967 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66908 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66908 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66908 ']' 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
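waitforlisten blocks until the freshly started nvmf_tgt (pid 66908 here) is ready to serve RPCs on /var/tmp/spdk.sock. A rough way to approximate it by hand is to poll an RPC that every SPDK application answers, for example rpc_get_methods; the loop below is only an illustration, not the actual autotest helper:

  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done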
00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.968 19:46:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.968 [2024-07-15 19:46:34.045484] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:39.968 [2024-07-15 19:46:34.046209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.968 [2024-07-15 19:46:34.188734] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.226 [2024-07-15 19:46:34.313348] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.226 [2024-07-15 19:46:34.313405] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.226 [2024-07-15 19:46:34.313425] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.226 [2024-07-15 19:46:34.313435] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.226 [2024-07-15 19:46:34.313445] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.226 [2024-07-15 19:46:34.313480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.226 [2024-07-15 19:46:34.371313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 [2024-07-15 19:46:35.100787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 Malloc0 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 [2024-07-15 19:46:35.173738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66940 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66940 /var/tmp/bdevperf.sock 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66940 ']' 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.163 19:46:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.163 [2024-07-15 19:46:35.233455] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
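The rpc_cmd calls above (forwarded to scripts/rpc.py against the default /var/tmp/spdk.sock socket) build the entire target-side configuration for the queue-depth test. Issued directly, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the sequence is:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420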
00:08:41.163 [2024-07-15 19:46:35.233553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66940 ] 00:08:41.163 [2024-07-15 19:46:35.375504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.422 [2024-07-15 19:46:35.506238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.422 [2024-07-15 19:46:35.565345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.990 19:46:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.990 19:46:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:41.990 19:46:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:41.990 19:46:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.990 19:46:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:42.249 NVMe0n1 00:08:42.249 19:46:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.249 19:46:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.249 Running I/O for 10 seconds... 00:08:54.475 00:08:54.475 Latency(us) 00:08:54.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.475 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:54.475 Verification LBA range: start 0x0 length 0x4000 00:08:54.475 NVMe0n1 : 10.09 8233.10 32.16 0.00 0.00 123792.96 28716.68 93418.59 00:08:54.475 =================================================================================================================== 00:08:54.475 Total : 8233.10 32.16 0.00 0.00 123792.96 28716.68 93418.59 00:08:54.475 0 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66940 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66940 ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66940 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66940 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:54.475 killing process with pid 66940 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66940' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66940 00:08:54.475 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.475 00:08:54.475 Latency(us) 00:08:54.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.475 
=================================================================================================================== 00:08:54.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66940 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.475 rmmod nvme_tcp 00:08:54.475 rmmod nvme_fabrics 00:08:54.475 rmmod nvme_keyring 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66908 ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66908 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66908 ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66908 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66908 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:54.475 killing process with pid 66908 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66908' 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66908 00:08:54.475 19:46:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66908 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:08:54.475 00:08:54.475 real 0m13.659s 00:08:54.475 user 0m23.374s 00:08:54.475 sys 0m2.426s 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.475 19:46:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 ************************************ 00:08:54.475 END TEST nvmf_queue_depth 00:08:54.475 ************************************ 00:08:54.475 19:46:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.475 19:46:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:54.475 19:46:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.475 19:46:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.475 19:46:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 ************************************ 00:08:54.475 START TEST nvmf_target_multipath 00:08:54.475 ************************************ 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:54.475 * Looking for test storage... 00:08:54.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.475 19:46:47 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.475 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:54.476 Cannot find device "nvmf_tgt_br" 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.476 Cannot find device "nvmf_tgt_br2" 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:54.476 Cannot find device "nvmf_tgt_br" 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:54.476 Cannot find device "nvmf_tgt_br2" 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:54.476 00:08:54.476 --- 10.0.0.2 ping statistics --- 00:08:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.476 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:54.476 00:08:54.476 --- 10.0.0.3 ping statistics --- 00:08:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.476 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:54.476 00:08:54.476 --- 10.0.0.1 ping statistics --- 00:08:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.476 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67265 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67265 00:08:54.476 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67265 ']' 00:08:54.477 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.477 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.477 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.477 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.477 19:46:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.477 [2024-07-15 19:46:47.747168] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
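The nvmf_veth_init block traced just above amounts to the following network plumbing. This is a condensed sketch of the commands actually run; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is wired up the same way and is left out here for brevity.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side ends together
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # sanity check before nvmf_tgt is started inside the namespace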
00:08:54.477 [2024-07-15 19:46:47.747296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.477 [2024-07-15 19:46:47.891945] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.477 [2024-07-15 19:46:48.024007] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.477 [2024-07-15 19:46:48.024086] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.477 [2024-07-15 19:46:48.024111] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.477 [2024-07-15 19:46:48.024122] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.477 [2024-07-15 19:46:48.024131] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.477 [2024-07-15 19:46:48.024237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.477 [2024-07-15 19:46:48.024390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.477 [2024-07-15 19:46:48.024884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.477 [2024-07-15 19:46:48.024941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.477 [2024-07-15 19:46:48.081794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.735 19:46:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.994 [2024-07-15 19:46:49.091113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.994 19:46:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:55.322 Malloc0 00:08:55.322 19:46:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:55.582 19:46:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.841 19:46:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.099 [2024-07-15 19:46:50.099922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.099 19:46:50 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:56.099 [2024-07-15 19:46:50.324105] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:56.356 19:46:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:56.356 19:46:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:56.614 19:46:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.614 19:46:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:56.614 19:46:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.614 19:46:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:56.614 19:46:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:58.514 19:46:52 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.514 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67355 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:58.515 19:46:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:58.515 [global] 00:08:58.515 thread=1 00:08:58.515 invalidate=1 00:08:58.515 rw=randrw 00:08:58.515 time_based=1 00:08:58.515 runtime=6 00:08:58.515 ioengine=libaio 00:08:58.515 direct=1 00:08:58.515 bs=4096 00:08:58.515 iodepth=128 00:08:58.515 norandommap=0 00:08:58.515 numjobs=1 00:08:58.515 00:08:58.515 verify_dump=1 00:08:58.515 verify_backlog=512 00:08:58.515 verify_state_save=0 00:08:58.515 do_verify=1 00:08:58.515 verify=crc32c-intel 00:08:58.515 [job0] 00:08:58.515 filename=/dev/nvme0n1 00:08:58.515 Could not set queue depth (nvme0n1) 00:08:58.772 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.772 fio-3.35 00:08:58.772 Starting 1 thread 00:08:59.748 19:46:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:59.748 19:46:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:00.312 
19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.312 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:00.569 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.827 19:46:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67355 00:09:05.018 00:09:05.018 job0: (groupid=0, jobs=1): err= 0: pid=67380: Mon Jul 15 19:46:58 2024 00:09:05.018 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(242MiB/6007msec) 00:09:05.018 slat (usec): min=4, max=7491, avg=56.94, stdev=226.23 00:09:05.018 clat (usec): min=1293, max=18904, avg=8429.16, stdev=1448.63 00:09:05.018 lat (usec): min=1304, max=18912, avg=8486.10, stdev=1453.45 00:09:05.018 clat percentiles (usec): 00:09:05.018 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:05.018 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:09:05.018 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11863], 00:09:05.018 | 99.00th=[13173], 99.50th=[13698], 99.90th=[14877], 99.95th=[15926], 00:09:05.018 | 99.99th=[18220] 00:09:05.018 bw ( KiB/s): min= 6832, max=27992, per=52.22%, avg=21525.18, stdev=7274.39, samples=11 00:09:05.018 iops : min= 1708, max= 6998, avg=5381.27, stdev=1818.58, samples=11 00:09:05.018 write: IOPS=6224, BW=24.3MiB/s (25.5MB/s)(128MiB/5262msec); 0 zone resets 00:09:05.018 slat (usec): min=14, max=2690, avg=65.51, stdev=155.79 00:09:05.018 clat (usec): min=1096, max=18140, avg=7333.06, stdev=1256.69 00:09:05.018 lat (usec): min=1120, max=18177, avg=7398.56, stdev=1260.37 00:09:05.018 clat percentiles (usec): 00:09:05.018 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 6128], 20.00th=[ 6849], 00:09:05.018 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7635], 00:09:05.018 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8586], 00:09:05.018 | 99.00th=[11207], 99.50th=[11994], 99.90th=[14222], 99.95th=[16057], 00:09:05.018 | 99.99th=[17171] 00:09:05.018 bw ( KiB/s): min= 7296, max=27392, per=86.73%, avg=21592.27, stdev=6996.34, samples=11 00:09:05.018 iops : min= 1824, max= 6848, avg=5398.00, stdev=1749.03, samples=11 00:09:05.018 lat (msec) : 2=0.04%, 4=1.16%, 10=92.99%, 20=5.81% 00:09:05.018 cpu : usr=5.39%, sys=22.38%, ctx=5497, majf=0, minf=96 00:09:05.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:05.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.018 issued rwts: total=61897,32751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.018 00:09:05.018 Run status group 0 (all jobs): 00:09:05.018 READ: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=242MiB (254MB), run=6007-6007msec 00:09:05.018 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=128MiB (134MB), run=5262-5262msec 00:09:05.018 00:09:05.018 Disk stats (read/write): 00:09:05.018 nvme0n1: ios=61005/32132, merge=0/0, ticks=491845/220840, in_queue=712685, util=98.66% 00:09:05.018 19:46:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:05.018 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67455 00:09:05.339 19:46:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:05.339 [global] 00:09:05.339 thread=1 00:09:05.339 invalidate=1 00:09:05.339 rw=randrw 00:09:05.339 time_based=1 00:09:05.339 runtime=6 00:09:05.339 ioengine=libaio 00:09:05.339 direct=1 00:09:05.339 bs=4096 00:09:05.339 iodepth=128 00:09:05.339 norandommap=0 00:09:05.339 numjobs=1 00:09:05.339 00:09:05.339 verify_dump=1 00:09:05.339 verify_backlog=512 00:09:05.339 verify_state_save=0 00:09:05.339 do_verify=1 00:09:05.339 verify=crc32c-intel 00:09:05.339 [job0] 00:09:05.339 filename=/dev/nvme0n1 00:09:05.339 Could not set queue depth (nvme0n1) 00:09:05.598 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.598 fio-3.35 00:09:05.598 Starting 1 thread 00:09:06.536 19:47:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:06.795 19:47:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.053 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:07.312 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.571 19:47:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67455 00:09:11.753 00:09:11.753 job0: (groupid=0, jobs=1): err= 0: pid=67476: Mon Jul 15 19:47:05 2024 00:09:11.753 read: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(267MiB/6007msec) 00:09:11.753 slat (usec): min=6, max=6149, avg=44.45, stdev=185.05 00:09:11.753 clat (usec): min=953, max=16204, avg=7682.77, stdev=1909.55 00:09:11.753 lat (usec): min=965, max=16220, avg=7727.22, stdev=1925.24 00:09:11.753 clat percentiles (usec): 00:09:11.753 | 1.00th=[ 2999], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5932], 00:09:11.753 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:09:11.753 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10159], 00:09:11.753 | 99.00th=[13173], 99.50th=[13698], 99.90th=[14615], 99.95th=[14877], 00:09:11.753 | 99.99th=[15664] 00:09:11.753 bw ( KiB/s): min= 9808, max=37688, per=52.60%, avg=23982.55, stdev=8829.83, samples=11 00:09:11.753 iops : min= 2452, max= 9422, avg=5995.82, stdev=2207.73, samples=11 00:09:11.753 write: IOPS=6931, BW=27.1MiB/s (28.4MB/s)(141MiB/5221msec); 0 zone resets 00:09:11.753 slat (usec): min=14, max=3320, avg=55.33, stdev=134.24 00:09:11.753 clat (usec): min=1108, max=14370, avg=6537.35, stdev=1803.58 00:09:11.753 lat (usec): min=1134, max=14395, avg=6592.67, stdev=1817.92 00:09:11.753 clat percentiles (usec): 00:09:11.753 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3851], 20.00th=[ 4490], 00:09:11.753 | 30.00th=[ 5407], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7439], 00:09:11.753 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:09:11.753 | 99.00th=[10683], 99.50th=[11731], 99.90th=[12780], 99.95th=[13435], 00:09:11.753 | 99.99th=[14222] 00:09:11.753 bw ( KiB/s): min=10168, max=36934, per=86.57%, avg=24004.18, stdev=8650.14, samples=11 00:09:11.753 iops : min= 2542, max= 9233, avg=6001.00, stdev=2162.46, samples=11 00:09:11.753 lat (usec) : 1000=0.01% 00:09:11.753 lat (msec) : 2=0.24%, 4=6.17%, 10=89.65%, 20=3.95% 00:09:11.753 cpu : usr=5.99%, sys=24.03%, ctx=5824, majf=0, minf=157 00:09:11.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:11.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.753 issued rwts: total=68464,36190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.753 00:09:11.753 Run status group 0 (all jobs): 00:09:11.753 READ: bw=44.5MiB/s (46.7MB/s), 44.5MiB/s-44.5MiB/s (46.7MB/s-46.7MB/s), io=267MiB (280MB), run=6007-6007msec 00:09:11.753 WRITE: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=141MiB (148MB), run=5221-5221msec 00:09:11.753 00:09:11.753 Disk stats (read/write): 00:09:11.753 nvme0n1: ios=67778/35324, merge=0/0, ticks=494935/214553, in_queue=709488, util=98.70% 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:11.753 19:47:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.320 rmmod nvme_tcp 00:09:12.320 rmmod nvme_fabrics 00:09:12.320 rmmod nvme_keyring 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67265 ']' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67265 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67265 ']' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67265 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67265 00:09:12.320 killing process with pid 67265 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67265' 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67265 00:09:12.320 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67265 00:09:12.579 
19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:12.579 ************************************ 00:09:12.579 END TEST nvmf_target_multipath 00:09:12.579 ************************************ 00:09:12.579 00:09:12.579 real 0m19.444s 00:09:12.579 user 1m12.802s 00:09:12.579 sys 0m10.139s 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.579 19:47:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.579 19:47:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:12.579 19:47:06 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.579 19:47:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.579 19:47:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.579 19:47:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.579 ************************************ 00:09:12.579 START TEST nvmf_zcopy 00:09:12.579 ************************************ 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.579 * Looking for test storage... 
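Before the zcopy test proper begins, the multipath run that just completed can be summarized as the following sequence. This is a condensed sketch of the RPCs and commands visible in the trace above; $SPDK_DIR abbreviates the repo path, and $NVME_HOSTNQN/$NVME_HOSTID stand for the generated hostnqn/hostid UUIDs shown earlier in the log.

$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# connect through both portals so the kernel sees two paths (nvme0c0n1 / nvme0c1n1)
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# flip ANA states while fio runs, then check /sys/block/nvme0c*n1/ana_state follows
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$SPDK_DIR/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v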
00:09:12.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.579 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.580 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.580 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.580 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.580 19:47:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.580 19:47:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.841 Cannot find device "nvmf_tgt_br" 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.841 Cannot find device "nvmf_tgt_br2" 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:12.841 Cannot find device "nvmf_tgt_br" 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.841 Cannot find device "nvmf_tgt_br2" 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.841 19:47:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.841 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:13.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:09:13.103 00:09:13.103 --- 10.0.0.2 ping statistics --- 00:09:13.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.103 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:13.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:09:13.103 00:09:13.103 --- 10.0.0.3 ping statistics --- 00:09:13.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.103 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:09:13.103 00:09:13.103 --- 10.0.0.1 ping statistics --- 00:09:13.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.103 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67732 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67732 00:09:13.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67732 ']' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.103 19:47:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 [2024-07-15 19:47:07.252548] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:09:13.103 [2024-07-15 19:47:07.252912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.361 [2024-07-15 19:47:07.394415] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.361 [2024-07-15 19:47:07.497684] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.361 [2024-07-15 19:47:07.497791] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:13.361 [2024-07-15 19:47:07.497802] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.361 [2024-07-15 19:47:07.497810] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.361 [2024-07-15 19:47:07.497817] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.361 [2024-07-15 19:47:07.497841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.361 [2024-07-15 19:47:07.556045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 [2024-07-15 19:47:08.257624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 [2024-07-15 19:47:08.273735] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:14.296 malloc0 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:14.296 { 00:09:14.296 "params": { 00:09:14.296 "name": "Nvme$subsystem", 00:09:14.296 "trtype": "$TEST_TRANSPORT", 00:09:14.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.296 "adrfam": "ipv4", 00:09:14.296 "trsvcid": "$NVMF_PORT", 00:09:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.296 "hdgst": ${hdgst:-false}, 00:09:14.296 "ddgst": ${ddgst:-false} 00:09:14.296 }, 00:09:14.296 "method": "bdev_nvme_attach_controller" 00:09:14.296 } 00:09:14.296 EOF 00:09:14.296 )") 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:14.296 19:47:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:14.296 "params": { 00:09:14.296 "name": "Nvme1", 00:09:14.296 "trtype": "tcp", 00:09:14.296 "traddr": "10.0.0.2", 00:09:14.296 "adrfam": "ipv4", 00:09:14.296 "trsvcid": "4420", 00:09:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.296 "hdgst": false, 00:09:14.296 "ddgst": false 00:09:14.296 }, 00:09:14.296 "method": "bdev_nvme_attach_controller" 00:09:14.296 }' 00:09:14.296 [2024-07-15 19:47:08.368971] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:09:14.296 [2024-07-15 19:47:08.369056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67765 ] 00:09:14.296 [2024-07-15 19:47:08.511679] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.556 [2024-07-15 19:47:08.643894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.556 [2024-07-15 19:47:08.711371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.815 Running I/O for 10 seconds... 
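Up to this point the trace has built the whole zcopy target: a TCP transport created with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a 32 MB malloc bdev with 4096-byte blocks attached as namespace 1, and a bdevperf initiator that reads its bdev configuration from an anonymous pipe (--json /dev/fd/62) before starting the 10-second verify workload. The lines below are a minimal sketch of that same sequence, not the test's verbatim helpers: SPDK_DIR and /tmp/bdevperf.json are illustrative, the RPC flags simply mirror the rpc_cmd calls visible above, and the real gen_nvmf_target_json may wrap the attach-controller fragment with additional bdev options.

SPDK_DIR=/home/vagrant/spdk_repo/spdk        # illustrative checkout path
rpc="$SPDK_DIR/scripts/rpc.py"               # talks to the running nvmf_tgt over its default RPC socket

# Target side: transport with zcopy, subsystem, listener, backing bdev, namespace
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: hand bdevperf a config that attaches the remote controller,
# equivalent in content to what the trace streams over /dev/fd/62
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

"$SPDK_DIR"/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192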
00:09:24.862 00:09:24.862 Latency(us) 00:09:24.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:24.862 Verification LBA range: start 0x0 length 0x1000 00:09:24.862 Nvme1n1 : 10.01 5692.91 44.48 0.00 0.00 22414.20 450.56 31695.59 00:09:24.862 =================================================================================================================== 00:09:24.862 Total : 5692.91 44.48 0.00 0.00 22414.20 450.56 31695.59 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67880 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.862 { 00:09:24.862 "params": { 00:09:24.862 "name": "Nvme$subsystem", 00:09:24.862 "trtype": "$TEST_TRANSPORT", 00:09:24.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.862 "adrfam": "ipv4", 00:09:24.862 "trsvcid": "$NVMF_PORT", 00:09:24.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.862 "hdgst": ${hdgst:-false}, 00:09:24.862 "ddgst": ${ddgst:-false} 00:09:24.862 }, 00:09:24.862 "method": "bdev_nvme_attach_controller" 00:09:24.862 } 00:09:24.862 EOF 00:09:24.862 )") 00:09:24.862 [2024-07-15 19:47:19.084231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.862 [2024-07-15 19:47:19.084287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:24.862 19:47:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.862 "params": { 00:09:24.862 "name": "Nvme1", 00:09:24.862 "trtype": "tcp", 00:09:24.862 "traddr": "10.0.0.2", 00:09:24.862 "adrfam": "ipv4", 00:09:24.862 "trsvcid": "4420", 00:09:24.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.863 "hdgst": false, 00:09:24.863 "ddgst": false 00:09:24.863 }, 00:09:24.863 "method": "bdev_nvme_attach_controller" 00:09:24.863 }' 00:09:24.863 [2024-07-15 19:47:19.092199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.863 [2024-07-15 19:47:19.092233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.863 [2024-07-15 19:47:19.104209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.863 [2024-07-15 19:47:19.104242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.112193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.112223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.124210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.124260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.132216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.132252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.139857] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:09:25.121 [2024-07-15 19:47:19.139946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67880 ] 00:09:25.121 [2024-07-15 19:47:19.140236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.140254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.148206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.148238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.156206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.156258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.164216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.164249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.172217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.172248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.184230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.184272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.196235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.196275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.208261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.208314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.220233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.220278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.228232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.228273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.240235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.240274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.252236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.252276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.260247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.260291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.268240] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.268280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.274760] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.121 [2024-07-15 19:47:19.276249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.276291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.288281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.288326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.300282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.300327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.312284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.312324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.320257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.320297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.328256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.328298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.340273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.340307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.352298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.352343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.121 [2024-07-15 19:47:19.364281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.121 [2024-07-15 19:47:19.364316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.376287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.376323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.388293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.388329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.400297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.400333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.403200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.380 [2024-07-15 19:47:19.412295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.412327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:09:25.380 [2024-07-15 19:47:19.424324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.424368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.436334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.436395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.448351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.448415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.460342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.460398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.467090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.380 [2024-07-15 19:47:19.468342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.468386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.476335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.476386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.488347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.488408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.500328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.500363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.512327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.512362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.380 [2024-07-15 19:47:19.520344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.380 [2024-07-15 19:47:19.520392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.528354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.528401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.540352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.540404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.548356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.548412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.560364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.560416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
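The stretch that follows is dominated by pairs of "Requested NSID 1 already in use" / "Unable to add namespace" errors. Judging by the trace, the test appears to be deliberately re-issuing nvmf_subsystem_add_ns for a namespace that is already attached while the second bdevperf job (randrw 50/50, queue depth 128, 8 KiB I/O, 5 seconds, config over /dev/fd/63) is brought up and run: each attempt pauses the subsystem, fails the add because NSID 1 already exists, and resumes it, so the error pair logged on every iteration is the by-product of exercising subsystem pause/resume under zero-copy load rather than a setup failure. A hypothetical illustration of that pattern, not the test's verbatim loop (SPDK_DIR is illustrative):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

# Each call pauses nqn.2016-06.io.spdk:cnode1, fails with "NSID 1 already in use",
# and resumes it while bdevperf keeps I/O queued against that same namespace.
for _ in $(seq 20); do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done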
00:09:25.381 [2024-07-15 19:47:19.572409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.572449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.584414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.584457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 Running I/O for 5 seconds... 00:09:25.381 [2024-07-15 19:47:19.596448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.596500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.381 [2024-07-15 19:47:19.611406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.381 [2024-07-15 19:47:19.611449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.626593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.626632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.636777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.636815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.649082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.649135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.664177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.664219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.680014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.680070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.694767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.694823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.711219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.711275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.728077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.728155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.745090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.745170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.761209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.761276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.778014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.778061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.789168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.789212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.803132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.803172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.815880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.815920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.828393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.828436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.840349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.840407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.857057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.857104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.868119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.868165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.639 [2024-07-15 19:47:19.881949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.639 [2024-07-15 19:47:19.881995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.894822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.894869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.907850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.907916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.921090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.921157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.934052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.934096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.946493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.946537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.959049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.959099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 
[2024-07-15 19:47:19.972202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.972250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.984744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.984789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:19.997427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:19.997473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.009487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:20.009538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.022207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:20.022248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.034859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:20.034918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.047876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:20.047925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.060652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.897 [2024-07-15 19:47:20.060699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.897 [2024-07-15 19:47:20.073715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.073761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.898 [2024-07-15 19:47:20.086515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.086563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.898 [2024-07-15 19:47:20.099145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.099192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.898 [2024-07-15 19:47:20.111724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.111769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.898 [2024-07-15 19:47:20.124440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.124484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.898 [2024-07-15 19:47:20.137368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.898 [2024-07-15 19:47:20.137412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.150703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 
19:47:20.150750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.163705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.163755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.176319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.176357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.188900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.188946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.200995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.201037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.213911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.213959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.226914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.226959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.239637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.239691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.253037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.253090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.266188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.266253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.284456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.284522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.295716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.295765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.308062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.308109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.320517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.320568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.333338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.333395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.346580] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.346633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.359701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.155 [2024-07-15 19:47:20.359754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.155 [2024-07-15 19:47:20.372874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.156 [2024-07-15 19:47:20.372919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.156 [2024-07-15 19:47:20.385676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.156 [2024-07-15 19:47:20.385728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.156 [2024-07-15 19:47:20.399108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.156 [2024-07-15 19:47:20.399167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.412343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.412399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.424960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.425011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.441688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.441753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.456678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.456730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.467204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.467275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.479924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.479989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.492759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.492817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.505410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.505468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.517643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.517694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.534649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.534717] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.546892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.546953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.559653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.559715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.572115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.572180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.584537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.584620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.597650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.597720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.610442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.610497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.623370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.623435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.636498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.636562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.413 [2024-07-15 19:47:20.649952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.413 [2024-07-15 19:47:20.650023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.663118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.663187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.676185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.676257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.689593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.689693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.702759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.702823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.715776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.715835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.728770] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.728840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.740827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.740884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.753412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.753467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.771076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.771144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.783617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.783676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.797053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.797111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.810384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.810443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.823392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.823448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.836790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.836845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.849648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.849712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.862646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.862719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.875964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.876039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.888805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.888864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.901983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.902045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.671 [2024-07-15 19:47:20.914841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.671 [2024-07-15 19:47:20.914898] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.927954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.928034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.941077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.941159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.953707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.953769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.967669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.967729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.981506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.981567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:20.994718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:20.994778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.007510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.007558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.023286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.023347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.034388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.034444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.048049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.048109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.063623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.063698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.079120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.079207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.095835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.095906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.107277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.107334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.121536] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.121603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.134427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.134479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.146996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.147057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.160127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.160182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.929 [2024-07-15 19:47:21.172486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.929 [2024-07-15 19:47:21.172540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.185166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.185226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.198210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.198257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.211105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.211154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.224005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.224051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.236674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.236737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.249557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.249609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.262428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.262483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.275429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.275474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.288492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.288549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.303962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.304035] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.315728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.315783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.328813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.328873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.342034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.342101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.355254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.355346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.367989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.368072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.381004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.381075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.394492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.394584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.408105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.408162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.187 [2024-07-15 19:47:21.421056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.187 [2024-07-15 19:47:21.421116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.434032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.434091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.447462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.447515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.460770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.460823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.474092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.474150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.485752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.485803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.496984] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.497041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.508217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.508278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.519145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.519187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.531345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.531388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.541633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.541686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.554377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.554441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.566833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.566890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.579735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.579792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.593072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.593133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.605715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.605781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.619222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.619331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.632423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.632484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.645211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.645286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.658504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.658557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.671401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.671473] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.445 [2024-07-15 19:47:21.684360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.445 [2024-07-15 19:47:21.684424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.696791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.696845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.710062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.710121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.722888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.722965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.735948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.736033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.751990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.752077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.764081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.764159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.777570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.777633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.794243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.794306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.809675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.809738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.826043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.826108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.838281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.838339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.855807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.855882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.871319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.871381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.887355] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.887415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.898973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.899037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.911503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.911563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.924182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.924238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.703 [2024-07-15 19:47:21.937384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.703 [2024-07-15 19:47:21.937441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:21.950445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:21.950499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:21.963688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:21.963748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:21.976728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:21.976791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:21.992428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:21.992494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:22.007631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:22.007700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:22.023680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:22.023756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:22.040673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.961 [2024-07-15 19:47:22.040753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.961 [2024-07-15 19:47:22.057655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.057726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.069369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.069437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.081326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.081384] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.094362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.094403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.109738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.109799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.122211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.122298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.135332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.135398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.148454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.148510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.161335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.161386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.174442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.174493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.187461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.187514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.962 [2024-07-15 19:47:22.201045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.962 [2024-07-15 19:47:22.201091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.214213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.214258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.227531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.227577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.240372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.240428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.253658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.253713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.266561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.266611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.279314] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.279372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.292565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.292612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.305614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.305657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.318002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.318044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.329111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.329159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.341823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.341890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.358523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.358612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.370034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.370085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.382855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.382910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.395754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.395808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.408248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.408316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.421022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.421074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.434023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.434077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.446800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.446850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.219 [2024-07-15 19:47:22.459728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.219 [2024-07-15 19:47:22.459780] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.472606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.476 [2024-07-15 19:47:22.472672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.488658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.476 [2024-07-15 19:47:22.488717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.499241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.476 [2024-07-15 19:47:22.499307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.513425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.476 [2024-07-15 19:47:22.513486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.526456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.476 [2024-07-15 19:47:22.526509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.476 [2024-07-15 19:47:22.539871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.539928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.555591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.555660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.567026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.567088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.580105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.580159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.592634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.592682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.605683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.605733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.618042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.618099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.630893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.630969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.644285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.644342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.657463] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.657530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.669925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.669971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.682660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.682706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.695368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.695412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.477 [2024-07-15 19:47:22.707779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.477 [2024-07-15 19:47:22.707822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.720739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.720819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.734385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.734460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.747582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.747644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.759740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.759798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.770973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.771020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.782486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.782528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.793824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.793868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.805185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.805223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.734 [2024-07-15 19:47:22.818074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.734 [2024-07-15 19:47:22.818115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.828947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.828984] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.840827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.840863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.851810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.851846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.863859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.863896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.873856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.873892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.885810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.885853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.897056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.897104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.910190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.910226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.920316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.920350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.932186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.932221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.943508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.943552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.954489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.954539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.965613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.965662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.735 [2024-07-15 19:47:22.978035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.735 [2024-07-15 19:47:22.978074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.992 [2024-07-15 19:47:22.987982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.992 [2024-07-15 19:47:22.988017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.992 [2024-07-15 19:47:22.999575] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.992 [2024-07-15 19:47:22.999611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.992 [2024-07-15 19:47:23.011037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.992 [2024-07-15 19:47:23.011075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.992 [2024-07-15 19:47:23.024092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.024127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.034373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.034408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.046101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.046136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.056828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.056869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.068206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.068274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.079876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.079935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.091016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.091064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.102587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.102622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.113939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.113975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.125196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.125231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.136248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.136292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.147247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.147298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.158640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.158681] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.169638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.169675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.180587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.180626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.191810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.191850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.203134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.203176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.214291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.214333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.993 [2024-07-15 19:47:23.225690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.993 [2024-07-15 19:47:23.225733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.249 [2024-07-15 19:47:23.236846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.249 [2024-07-15 19:47:23.236888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.249 [2024-07-15 19:47:23.252430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.249 [2024-07-15 19:47:23.252481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.249 [2024-07-15 19:47:23.267580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.249 [2024-07-15 19:47:23.267619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.249 [2024-07-15 19:47:23.277028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.249 [2024-07-15 19:47:23.277064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.249 [2024-07-15 19:47:23.289177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.289216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.300472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.300506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.313698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.313734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.323881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.323916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.337401] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.337454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.350503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.350572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.362968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.363018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.374192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.374232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.390742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.390780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.401051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.401089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.412649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.412688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.424616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.424662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.436896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.436935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.449009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.449054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.459996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.460041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.472232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.472293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.250 [2024-07-15 19:47:23.482561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.250 [2024-07-15 19:47:23.482601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.495790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.507 [2024-07-15 19:47:23.495838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.510218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.507 [2024-07-15 19:47:23.510276] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.524360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.507 [2024-07-15 19:47:23.524434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.539083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.507 [2024-07-15 19:47:23.539132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.553236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.507 [2024-07-15 19:47:23.553309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.507 [2024-07-15 19:47:23.567080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.567135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.581492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.581548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.595217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.595291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.610179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.610247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.623085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.623170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.637474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.637530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.653047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.653112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.666018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.666066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.677775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.677815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.690115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.690160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.707327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.707395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.722472] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.722531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.738957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.739031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.508 [2024-07-15 19:47:23.749699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.508 [2024-07-15 19:47:23.749744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.765 [2024-07-15 19:47:23.763125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.765 [2024-07-15 19:47:23.763170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.765 [2024-07-15 19:47:23.775726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.765 [2024-07-15 19:47:23.775772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.765 [2024-07-15 19:47:23.788877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.765 [2024-07-15 19:47:23.788943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.765 [2024-07-15 19:47:23.801944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.801982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.814837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.814878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.827000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.827039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.839767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.839821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.852720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.852778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.865294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.865345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.877717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.877766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.890458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.890500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.902815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.902870] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.914463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.914496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.926143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.926172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.940965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.941012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.951211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.951260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.963971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.964020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.979295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.979342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.766 [2024-07-15 19:47:23.995547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.766 [2024-07-15 19:47:23.995584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.012720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.012763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.029443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.029483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.045735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.045790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.062778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.062823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.078046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.078095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.094082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.094136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.113032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.113069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.128612] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.128666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.145466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.145506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.159986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.160035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.175637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.175684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.185447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.185483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.201857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.201903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.216476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.216518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.231701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.231744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.247988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.248029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.024 [2024-07-15 19:47:24.264203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.024 [2024-07-15 19:47:24.264243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.281254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.281312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.297663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.297703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.314254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.314322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.330371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.330428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.341059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.341105] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.355936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.355979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.372590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.372641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.389400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.389438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.405423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.405483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.424593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.424638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.439880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.439918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.449649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.449687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.464983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.465049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.476036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.476081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.490560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.490601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.507170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.507216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.282 [2024-07-15 19:47:24.524409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.282 [2024-07-15 19:47:24.524477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.540 [2024-07-15 19:47:24.540347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.540 [2024-07-15 19:47:24.540427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.540 [2024-07-15 19:47:24.550534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.540 [2024-07-15 19:47:24.550574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.540 [2024-07-15 19:47:24.565986] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.540 [2024-07-15 19:47:24.566030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.540 [2024-07-15 19:47:24.582511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.540 [2024-07-15 19:47:24.582547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.598546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.598600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 00:09:30.541 Latency(us) 00:09:30.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.541 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:30.541 Nvme1n1 : 5.01 10176.65 79.51 0.00 0.00 12560.13 5064.15 25976.09 00:09:30.541 =================================================================================================================== 00:09:30.541 Total : 10176.65 79.51 0.00 0.00 12560.13 5064.15 25976.09 00:09:30.541 [2024-07-15 19:47:24.608095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.608143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.620111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.620149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.632140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.632181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.644126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.644165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.656128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.656167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.668140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.668179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.680136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.680177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.692148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.692191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.704150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.704189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.716148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.716189] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.728156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.728199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.740155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.740196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.752153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.752188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.764160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.764198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.541 [2024-07-15 19:47:24.776170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.541 [2024-07-15 19:47:24.776212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 [2024-07-15 19:47:24.788142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.799 [2024-07-15 19:47:24.788174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 [2024-07-15 19:47:24.800181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.799 [2024-07-15 19:47:24.800222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 [2024-07-15 19:47:24.812170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.799 [2024-07-15 19:47:24.812209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 [2024-07-15 19:47:24.824151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.799 [2024-07-15 19:47:24.824180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 [2024-07-15 19:47:24.836179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.799 [2024-07-15 19:47:24.836225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.799 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67880) - No such process 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67880 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 delay0 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
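The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the zcopy test repeatedly retrying nvmf_subsystem_add_ns against a subsystem whose NSID 1 is already occupied, so every attempt is expected to fail; the trace then removes the namespace and wraps malloc0 in a delay bdev (delay0) before re-adding it just below. A minimal hand-run sketch of the same RPC sequence, assuming a target that already serves malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 on the default /var/tmp/spdk.sock socket (the harness drives this through rpc_cmd rather than calling rpc.py directly):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Expected to fail while NSID 1 is occupied:
#   "Requested NSID 1 already in use" / "Unable to add namespace"
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
# Drop the namespace, wrap the bdev in a delay bdev (latencies in microseconds),
# and expose the delayed bdev as NSID 1 again, as the trace does next.
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
"$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1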
00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.799 19:47:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:30.799 [2024-07-15 19:47:25.037710] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:37.364 Initializing NVMe Controllers 00:09:37.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:37.364 Initialization complete. Launching workers. 00:09:37.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:09:37.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:09:37.364 success 290, unsuccess 97, failed 0 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.364 rmmod nvme_tcp 00:09:37.364 rmmod nvme_fabrics 00:09:37.364 rmmod nvme_keyring 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67732 ']' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67732 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67732 ']' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67732 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67732 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:37.364 killing process with pid 67732 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67732' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67732 00:09:37.364 19:47:31 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67732 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:37.364 ************************************ 00:09:37.364 END TEST nvmf_zcopy 00:09:37.364 ************************************ 00:09:37.364 00:09:37.364 real 0m24.780s 00:09:37.364 user 0m40.080s 00:09:37.364 sys 0m7.139s 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.364 19:47:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.364 19:47:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:37.364 19:47:31 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.364 19:47:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:37.364 19:47:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.364 19:47:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.364 ************************************ 00:09:37.364 START TEST nvmf_nmic 00:09:37.364 ************************************ 00:09:37.364 19:47:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.623 * Looking for test storage... 
00:09:37.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.623 19:47:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
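nvmf_veth_init, whose output follows, builds the virtual test network for NET_TYPE=virt: the target lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the veth peers. Condensed from the trace below into the commands that matter (the cleanup of stale interfaces from earlier runs is omitted here), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Once this topology is up, the target itself is started inside the namespace, as seen further down in the trace: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.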
00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:37.624 Cannot find device "nvmf_tgt_br" 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.624 Cannot find device "nvmf_tgt_br2" 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:37.624 Cannot find device "nvmf_tgt_br" 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.624 Cannot find device "nvmf_tgt_br2" 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:37.624 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.882 19:47:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.882 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.882 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.882 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:37.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:09:37.882 00:09:37.882 --- 10.0.0.2 ping statistics --- 00:09:37.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.882 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:37.882 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:37.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:37.882 00:09:37.882 --- 10.0.0.3 ping statistics --- 00:09:37.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.882 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:37.882 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:37.883 00:09:37.883 --- 10.0.0.1 ping statistics --- 00:09:37.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.883 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68202 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68202 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68202 ']' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.883 19:47:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.883 [2024-07-15 19:47:32.124123] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:09:37.883 [2024-07-15 19:47:32.124494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.139 [2024-07-15 19:47:32.265206] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.396 [2024-07-15 19:47:32.408017] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.396 [2024-07-15 19:47:32.408238] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:38.397 [2024-07-15 19:47:32.408285] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.397 [2024-07-15 19:47:32.408298] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.397 [2024-07-15 19:47:32.408320] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.397 [2024-07-15 19:47:32.408947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.397 [2024-07-15 19:47:32.409085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.397 [2024-07-15 19:47:32.409193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.397 [2024-07-15 19:47:32.409286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.397 [2024-07-15 19:47:32.471081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.962 [2024-07-15 19:47:33.161151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.962 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 Malloc0 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 [2024-07-15 19:47:33.238242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.220 test case1: single bdev can't be used in multiple subsystems 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 [2024-07-15 19:47:33.262074] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:39.220 [2024-07-15 19:47:33.262130] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:39.220 [2024-07-15 19:47:33.262145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.220 request: 00:09:39.220 { 00:09:39.220 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:39.220 "namespace": { 00:09:39.220 "bdev_name": "Malloc0", 00:09:39.220 "no_auto_visible": false 00:09:39.220 }, 00:09:39.220 "method": "nvmf_subsystem_add_ns", 00:09:39.220 "req_id": 1 00:09:39.220 } 00:09:39.220 Got JSON-RPC error response 00:09:39.220 response: 00:09:39.220 { 00:09:39.220 "code": -32602, 00:09:39.220 "message": "Invalid parameters" 00:09:39.220 } 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:39.220 Adding namespace failed - expected result. 
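Test case1 above demonstrates that a bdev already claimed by one subsystem cannot be added to another: Malloc0 is held with an exclusive_write claim by nqn.2016-06.io.spdk:cnode1, so adding it to cnode2 fails at bdev_open (error=-1) and the RPC returns -32602 "Invalid parameters", which is exactly the result the test expects. A minimal sketch reproducing the same failure by hand, assuming the target from this trace is still running with Malloc0 attached to cnode1:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Fails: Malloc0 is already claimed (exclusive_write) by cnode1, so the
# JSON-RPC response is {"code": -32602, "message": "Invalid parameters"}.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0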
00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:39.220 test case2: host connect to nvmf target in multiple paths 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.220 [2024-07-15 19:47:33.278247] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.220 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:39.479 19:47:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.479 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:39.479 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.479 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:39.479 19:47:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:41.390 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:41.391 19:47:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:41.391 [global] 00:09:41.391 thread=1 00:09:41.391 invalidate=1 00:09:41.391 rw=write 00:09:41.391 time_based=1 00:09:41.391 runtime=1 00:09:41.391 ioengine=libaio 00:09:41.391 direct=1 00:09:41.391 bs=4096 00:09:41.391 iodepth=1 00:09:41.391 norandommap=0 00:09:41.391 numjobs=1 00:09:41.391 00:09:41.391 verify_dump=1 00:09:41.391 verify_backlog=512 00:09:41.391 verify_state_save=0 00:09:41.391 do_verify=1 00:09:41.391 verify=crc32c-intel 00:09:41.391 [job0] 00:09:41.391 filename=/dev/nvme0n1 00:09:41.391 Could not set queue depth (nvme0n1) 00:09:41.651 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.651 fio-3.35 00:09:41.651 Starting 1 thread 00:09:43.024 00:09:43.024 job0: (groupid=0, jobs=1): err= 0: pid=68299: Mon Jul 15 19:47:36 2024 00:09:43.024 read: IOPS=2988, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:09:43.024 slat (nsec): min=12637, max=67778, avg=17347.49, stdev=4931.91 00:09:43.024 clat (usec): 
min=139, max=287, avg=175.70, stdev=16.37 00:09:43.024 lat (usec): min=155, max=314, avg=193.05, stdev=18.13 00:09:43.024 clat percentiles (usec): 00:09:43.024 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:43.024 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 180], 00:09:43.024 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:09:43.024 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 255], 99.95th=[ 262], 00:09:43.024 | 99.99th=[ 289] 00:09:43.024 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:43.024 slat (nsec): min=17104, max=98684, avg=24469.57, stdev=7068.37 00:09:43.024 clat (usec): min=86, max=273, avg=109.03, stdev=13.67 00:09:43.024 lat (usec): min=105, max=372, avg=133.50, stdev=17.15 00:09:43.024 clat percentiles (usec): 00:09:43.024 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:09:43.024 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:09:43.024 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 135], 00:09:43.024 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 192], 00:09:43.024 | 99.99th=[ 273] 00:09:43.024 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:43.025 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:43.025 lat (usec) : 100=13.64%, 250=86.28%, 500=0.08% 00:09:43.025 cpu : usr=3.00%, sys=9.50%, ctx=6064, majf=0, minf=2 00:09:43.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.025 issued rwts: total=2991,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.025 00:09:43.025 Run status group 0 (all jobs): 00:09:43.025 READ: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:09:43.025 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:43.025 00:09:43.025 Disk stats (read/write): 00:09:43.025 nvme0n1: ios=2610/2920, merge=0/0, ticks=481/342, in_queue=823, util=91.38% 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.025 19:47:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
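The I/O phase above is driven by scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v), and the [global]/[job0] job it produced is echoed verbatim in the trace, pointed at the namespace reached through the two connects on ports 4420 and 4421. A roughly equivalent standalone run, assuming the connected namespace still appears as /dev/nvme0n1 and that fio is on PATH (the job-file path /tmp/nmic-job0.fio is arbitrary, not taken from the trace):

cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio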
00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.283 rmmod nvme_tcp 00:09:43.283 rmmod nvme_fabrics 00:09:43.283 rmmod nvme_keyring 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68202 ']' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68202 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68202 ']' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68202 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68202 00:09:43.283 killing process with pid 68202 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68202' 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68202 00:09:43.283 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68202 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:43.542 00:09:43.542 real 0m6.175s 00:09:43.542 user 0m19.660s 00:09:43.542 sys 0m2.395s 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.542 19:47:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.542 ************************************ 00:09:43.542 END TEST nvmf_nmic 00:09:43.542 ************************************ 00:09:43.542 19:47:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:43.542 19:47:37 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.542 19:47:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.542 19:47:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:43.542 19:47:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.542 ************************************ 00:09:43.542 START TEST nvmf_fio_target 00:09:43.542 ************************************ 00:09:43.542 19:47:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.800 * Looking for test storage... 00:09:43.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.800 19:47:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.801 
19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:43.801 Cannot find device "nvmf_tgt_br" 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.801 Cannot find device "nvmf_tgt_br2" 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:43.801 Cannot find device "nvmf_tgt_br" 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:43.801 Cannot find device "nvmf_tgt_br2" 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:43.801 19:47:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:43.801 19:47:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.801 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:44.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:44.059 00:09:44.059 --- 10.0.0.2 ping statistics --- 00:09:44.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.059 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:44.059 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:44.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:09:44.059 00:09:44.060 --- 10.0.0.3 ping statistics --- 00:09:44.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.060 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:44.060 00:09:44.060 --- 10.0.0.1 ping statistics --- 00:09:44.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.060 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68481 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68481 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68481 ']' 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
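The nvmf_veth_init sequence traced above reduces to the following standalone sketch, reconstructed from the logged ip/iptables commands: a target network namespace holding two veth endpoints that carry the listener addresses (10.0.0.2, 10.0.0.3), bridged on the host side to the initiator interface (10.0.0.1). The script itself is illustrative and not part of the test suite; only the interface, namespace, and address names shown in the log are used.

  #!/usr/bin/env bash
  # Illustrative reconstruction of nvmf_veth_init from the commands logged above.
  set -e
  NS=nvmf_tgt_ns_spdk

  ip netns add "$NS"

  # veth pairs: the *_if ends carry the IPs, the *_br ends join the host bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side ends into the namespace
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # a single host bridge ties the three host-side ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP (port 4420) in and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity checks, mirroring the pings in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1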
00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.060 19:47:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.318 [2024-07-15 19:47:38.328787] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:09:44.318 [2024-07-15 19:47:38.328910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.318 [2024-07-15 19:47:38.468742] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.577 [2024-07-15 19:47:38.587002] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.577 [2024-07-15 19:47:38.587275] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.577 [2024-07-15 19:47:38.587406] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.577 [2024-07-15 19:47:38.587524] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.577 [2024-07-15 19:47:38.587541] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.577 [2024-07-15 19:47:38.587648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.577 [2024-07-15 19:47:38.587771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.577 [2024-07-15 19:47:38.588201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.577 [2024-07-15 19:47:38.588211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.577 [2024-07-15 19:47:38.645771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.511 19:47:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.511 [2024-07-15 19:47:39.738276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.769 19:47:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.026 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:46.026 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.284 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:46.284 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.543 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
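Target bring-up in this run amounts to launching nvmf_tgt inside the namespace and provisioning it over rpc.py. A minimal sketch of that sequence follows, using only commands visible in the log; the NS_CMD/nvmfpid variables and the socket-polling loop are illustrative stand-ins for waitforlisten, and the remaining raid, subsystem, and listener wiring continues in the log lines below.

  #!/usr/bin/env bash
  # Illustrative condensation of the target bring-up traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  NS_CMD="ip netns exec nvmf_tgt_ns_spdk"

  # start the target in the test namespace (-m 0xF = 4 cores,
  # -e 0xFFFF = tracepoint group mask, as reported in the startup notices)
  $NS_CMD $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # crude stand-in for waitforlisten: wait for the RPC socket to appear
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

  # create the TCP transport with the same flags the test passes
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB, 512-byte-block malloc bdevs; fio.sh keeps issuing these for the
  # plain namespaces and for the raid0/concat members wired up further below
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512   # Malloc0
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512   # Malloc1
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512   # Malloc2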
00:09:46.543 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.801 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:46.801 19:47:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:47.063 19:47:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.378 19:47:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:47.378 19:47:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.943 19:47:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:47.943 19:47:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.943 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:47.943 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:48.507 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.507 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.507 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.764 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.764 19:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.032 19:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.289 [2024-07-15 19:47:43.483642] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.289 19:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:49.852 19:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:09:50.110 19:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:52.634 19:47:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:52.634 [global] 00:09:52.634 thread=1 00:09:52.634 invalidate=1 00:09:52.634 rw=write 00:09:52.634 time_based=1 00:09:52.634 runtime=1 00:09:52.634 ioengine=libaio 00:09:52.634 direct=1 00:09:52.634 bs=4096 00:09:52.634 iodepth=1 00:09:52.634 norandommap=0 00:09:52.634 numjobs=1 00:09:52.634 00:09:52.634 verify_dump=1 00:09:52.634 verify_backlog=512 00:09:52.634 verify_state_save=0 00:09:52.634 do_verify=1 00:09:52.634 verify=crc32c-intel 00:09:52.634 [job0] 00:09:52.634 filename=/dev/nvme0n1 00:09:52.634 [job1] 00:09:52.634 filename=/dev/nvme0n2 00:09:52.634 [job2] 00:09:52.634 filename=/dev/nvme0n3 00:09:52.634 [job3] 00:09:52.634 filename=/dev/nvme0n4 00:09:52.634 Could not set queue depth (nvme0n1) 00:09:52.634 Could not set queue depth (nvme0n2) 00:09:52.634 Could not set queue depth (nvme0n3) 00:09:52.634 Could not set queue depth (nvme0n4) 00:09:52.634 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.634 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.634 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.634 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.634 fio-3.35 00:09:52.634 Starting 4 threads 00:09:53.582 00:09:53.582 job0: (groupid=0, jobs=1): err= 0: pid=68671: Mon Jul 15 19:47:47 2024 00:09:53.582 read: IOPS=1368, BW=5475KiB/s (5606kB/s)(5480KiB/1001msec) 00:09:53.582 slat (nsec): min=9260, max=77117, avg=19744.98, stdev=7244.98 00:09:53.582 clat (usec): min=237, max=900, avg=363.69, stdev=56.41 00:09:53.582 lat (usec): min=256, max=917, avg=383.44, stdev=56.88 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 318], 00:09:53.582 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 371], 00:09:53.582 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 461], 00:09:53.582 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 898], 99.95th=[ 898], 00:09:53.582 | 99.99th=[ 898] 00:09:53.582 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:53.582 slat (nsec): min=12487, max=96865, avg=26036.34, stdev=8376.67 00:09:53.582 clat (usec): min=105, max=2892, avg=278.72, stdev=85.20 00:09:53.582 lat (usec): min=126, max=2921, avg=304.76, stdev=85.46 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 163], 5.00th=[ 204], 10.00th=[ 223], 20.00th=[ 239], 00:09:53.582 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 
60.00th=[ 285], 00:09:53.582 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 355], 00:09:53.582 | 99.00th=[ 412], 99.50th=[ 449], 99.90th=[ 947], 99.95th=[ 2900], 00:09:53.582 | 99.99th=[ 2900] 00:09:53.582 bw ( KiB/s): min= 7608, max= 7608, per=26.56%, avg=7608.00, stdev= 0.00, samples=1 00:09:53.582 iops : min= 1902, max= 1902, avg=1902.00, stdev= 0.00, samples=1 00:09:53.582 lat (usec) : 250=15.55%, 500=83.55%, 750=0.76%, 1000=0.10% 00:09:53.582 lat (msec) : 4=0.03% 00:09:53.582 cpu : usr=1.50%, sys=5.70%, ctx=2907, majf=0, minf=11 00:09:53.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 issued rwts: total=1370,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.582 job1: (groupid=0, jobs=1): err= 0: pid=68672: Mon Jul 15 19:47:47 2024 00:09:53.582 read: IOPS=1362, BW=5451KiB/s (5581kB/s)(5456KiB/1001msec) 00:09:53.582 slat (nsec): min=9693, max=56831, avg=19914.57, stdev=6180.68 00:09:53.582 clat (usec): min=238, max=942, avg=362.88, stdev=54.54 00:09:53.582 lat (usec): min=252, max=954, avg=382.79, stdev=55.63 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 322], 00:09:53.582 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 371], 00:09:53.582 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 457], 00:09:53.582 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 906], 99.95th=[ 947], 00:09:53.582 | 99.99th=[ 947] 00:09:53.582 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:53.582 slat (nsec): min=11218, max=90974, avg=27038.17, stdev=8795.64 00:09:53.582 clat (usec): min=124, max=914, avg=279.69, stdev=59.41 00:09:53.582 lat (usec): min=145, max=954, avg=306.72, stdev=60.68 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 178], 5.00th=[ 208], 10.00th=[ 225], 20.00th=[ 239], 00:09:53.582 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:09:53.582 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 363], 00:09:53.582 | 99.00th=[ 449], 99.50th=[ 619], 99.90th=[ 783], 99.95th=[ 914], 00:09:53.582 | 99.99th=[ 914] 00:09:53.582 bw ( KiB/s): min= 7512, max= 7512, per=26.23%, avg=7512.00, stdev= 0.00, samples=1 00:09:53.582 iops : min= 1878, max= 1878, avg=1878.00, stdev= 0.00, samples=1 00:09:53.582 lat (usec) : 250=15.14%, 500=83.90%, 750=0.83%, 1000=0.14% 00:09:53.582 cpu : usr=2.30%, sys=5.10%, ctx=2901, majf=0, minf=9 00:09:53.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 issued rwts: total=1364,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.582 job2: (groupid=0, jobs=1): err= 0: pid=68673: Mon Jul 15 19:47:47 2024 00:09:53.582 read: IOPS=1939, BW=7756KiB/s (7942kB/s)(7764KiB/1001msec) 00:09:53.582 slat (nsec): min=14615, max=82936, avg=24263.31, stdev=5301.97 00:09:53.582 clat (usec): min=193, max=1053, avg=253.95, stdev=32.90 00:09:53.582 lat (usec): min=211, max=1072, avg=278.21, stdev=33.82 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 204], 5.00th=[ 
215], 10.00th=[ 223], 20.00th=[ 231], 00:09:53.582 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:09:53.582 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:09:53.582 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 424], 99.95th=[ 1057], 00:09:53.582 | 99.99th=[ 1057] 00:09:53.582 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:53.582 slat (nsec): min=17827, max=78317, avg=29986.56, stdev=8899.61 00:09:53.582 clat (usec): min=124, max=717, avg=189.60, stdev=32.05 00:09:53.582 lat (usec): min=145, max=760, avg=219.59, stdev=35.19 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 167], 00:09:53.582 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:09:53.582 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 235], 00:09:53.582 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 445], 99.95th=[ 701], 00:09:53.582 | 99.99th=[ 717] 00:09:53.582 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:53.582 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:53.582 lat (usec) : 250=74.23%, 500=25.70%, 750=0.05% 00:09:53.582 lat (msec) : 2=0.03% 00:09:53.582 cpu : usr=2.10%, sys=8.70%, ctx=3995, majf=0, minf=3 00:09:53.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.582 issued rwts: total=1941,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.582 job3: (groupid=0, jobs=1): err= 0: pid=68674: Mon Jul 15 19:47:47 2024 00:09:53.582 read: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec) 00:09:53.582 slat (nsec): min=12960, max=48634, avg=18031.22, stdev=3890.90 00:09:53.582 clat (usec): min=193, max=407, avg=251.05, stdev=27.12 00:09:53.582 lat (usec): min=208, max=423, avg=269.08, stdev=28.12 00:09:53.582 clat percentiles (usec): 00:09:53.582 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:09:53.582 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:09:53.582 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:09:53.582 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 408], 00:09:53.582 | 99.99th=[ 408] 00:09:53.582 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:53.582 slat (nsec): min=14679, max=88553, avg=27244.48, stdev=8394.93 00:09:53.582 clat (usec): min=139, max=636, avg=208.38, stdev=36.66 00:09:53.582 lat (usec): min=162, max=666, avg=235.63, stdev=39.63 00:09:53.583 clat percentiles (usec): 00:09:53.583 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:09:53.583 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 208], 00:09:53.583 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 253], 95.00th=[ 277], 00:09:53.583 | 99.00th=[ 318], 99.50th=[ 359], 99.90th=[ 545], 99.95th=[ 586], 00:09:53.583 | 99.99th=[ 635] 00:09:53.583 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:53.583 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:53.583 lat (usec) : 250=72.63%, 500=27.29%, 750=0.08% 00:09:53.583 cpu : usr=2.10%, sys=6.80%, ctx=3940, majf=0, minf=12 00:09:53.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.583 issued rwts: total=1891,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.583 00:09:53.583 Run status group 0 (all jobs): 00:09:53.583 READ: bw=25.6MiB/s (26.9MB/s), 5451KiB/s-7756KiB/s (5581kB/s-7942kB/s), io=25.6MiB (26.9MB), run=1001-1001msec 00:09:53.583 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:53.583 00:09:53.583 Disk stats (read/write): 00:09:53.583 nvme0n1: ios=1074/1500, merge=0/0, ticks=393/399, in_queue=792, util=88.08% 00:09:53.583 nvme0n2: ios=1051/1489, merge=0/0, ticks=387/401, in_queue=788, util=88.32% 00:09:53.583 nvme0n3: ios=1536/1938, merge=0/0, ticks=407/384, in_queue=791, util=89.35% 00:09:53.583 nvme0n4: ios=1536/1875, merge=0/0, ticks=381/416, in_queue=797, util=89.81% 00:09:53.583 19:47:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:53.583 [global] 00:09:53.583 thread=1 00:09:53.583 invalidate=1 00:09:53.583 rw=randwrite 00:09:53.583 time_based=1 00:09:53.583 runtime=1 00:09:53.583 ioengine=libaio 00:09:53.583 direct=1 00:09:53.583 bs=4096 00:09:53.583 iodepth=1 00:09:53.583 norandommap=0 00:09:53.583 numjobs=1 00:09:53.583 00:09:53.583 verify_dump=1 00:09:53.583 verify_backlog=512 00:09:53.583 verify_state_save=0 00:09:53.583 do_verify=1 00:09:53.583 verify=crc32c-intel 00:09:53.583 [job0] 00:09:53.583 filename=/dev/nvme0n1 00:09:53.583 [job1] 00:09:53.583 filename=/dev/nvme0n2 00:09:53.583 [job2] 00:09:53.583 filename=/dev/nvme0n3 00:09:53.583 [job3] 00:09:53.583 filename=/dev/nvme0n4 00:09:53.583 Could not set queue depth (nvme0n1) 00:09:53.583 Could not set queue depth (nvme0n2) 00:09:53.583 Could not set queue depth (nvme0n3) 00:09:53.583 Could not set queue depth (nvme0n4) 00:09:53.841 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.841 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.841 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.841 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.841 fio-3.35 00:09:53.841 Starting 4 threads 00:09:54.798 00:09:54.798 job0: (groupid=0, jobs=1): err= 0: pid=68739: Mon Jul 15 19:47:49 2024 00:09:54.798 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:54.798 slat (nsec): min=11805, max=54695, avg=16564.23, stdev=4370.60 00:09:54.798 clat (usec): min=167, max=816, avg=247.02, stdev=39.40 00:09:54.798 lat (usec): min=181, max=833, avg=263.59, stdev=39.87 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:09:54.798 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:09:54.798 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:09:54.798 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 529], 99.95th=[ 570], 00:09:54.798 | 99.99th=[ 816] 00:09:54.798 write: IOPS=2173, BW=8695KiB/s (8904kB/s)(8704KiB/1001msec); 0 zone resets 00:09:54.798 slat (usec): min=19, max=177, avg=25.56, stdev= 7.26 00:09:54.798 clat (usec): min=107, 
max=693, avg=182.23, stdev=38.46 00:09:54.798 lat (usec): min=128, max=714, avg=207.78, stdev=40.66 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 120], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 151], 00:09:54.798 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 188], 00:09:54.798 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 233], 95.00th=[ 251], 00:09:54.798 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 537], 00:09:54.798 | 99.99th=[ 693] 00:09:54.798 bw ( KiB/s): min= 8192, max= 8192, per=29.28%, avg=8192.00, stdev= 0.00, samples=1 00:09:54.798 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:54.798 lat (usec) : 250=75.73%, 500=24.15%, 750=0.09%, 1000=0.02% 00:09:54.798 cpu : usr=1.90%, sys=7.10%, ctx=4225, majf=0, minf=13 00:09:54.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 issued rwts: total=2048,2176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.798 job1: (groupid=0, jobs=1): err= 0: pid=68740: Mon Jul 15 19:47:49 2024 00:09:54.798 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:54.798 slat (nsec): min=11282, max=63544, avg=15663.14, stdev=4200.94 00:09:54.798 clat (usec): min=159, max=933, avg=241.22, stdev=41.49 00:09:54.798 lat (usec): min=173, max=945, avg=256.88, stdev=42.19 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 206], 00:09:54.798 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:09:54.798 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:09:54.798 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 469], 00:09:54.798 | 99.99th=[ 930] 00:09:54.798 write: IOPS=2290, BW=9163KiB/s (9383kB/s)(9172KiB/1001msec); 0 zone resets 00:09:54.798 slat (usec): min=13, max=149, avg=23.60, stdev= 7.24 00:09:54.798 clat (usec): min=99, max=611, avg=179.43, stdev=37.62 00:09:54.798 lat (usec): min=119, max=628, avg=203.04, stdev=39.82 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 115], 5.00th=[ 127], 10.00th=[ 137], 20.00th=[ 147], 00:09:54.798 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 186], 00:09:54.798 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 231], 95.00th=[ 249], 00:09:54.798 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 404], 00:09:54.798 | 99.99th=[ 611] 00:09:54.798 bw ( KiB/s): min= 8192, max= 8192, per=29.28%, avg=8192.00, stdev= 0.00, samples=1 00:09:54.798 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:54.798 lat (usec) : 100=0.02%, 250=78.88%, 500=21.06%, 750=0.02%, 1000=0.02% 00:09:54.798 cpu : usr=2.20%, sys=6.60%, ctx=4341, majf=0, minf=5 00:09:54.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 issued rwts: total=2048,2293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.798 job2: (groupid=0, jobs=1): err= 0: pid=68741: Mon Jul 15 19:47:49 2024 00:09:54.798 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:54.798 slat (nsec): min=12706, max=99801, 
avg=27312.50, stdev=9401.68 00:09:54.798 clat (usec): min=232, max=2925, avg=459.14, stdev=155.48 00:09:54.798 lat (usec): min=252, max=2962, avg=486.46, stdev=159.10 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 277], 5.00th=[ 322], 10.00th=[ 343], 20.00th=[ 367], 00:09:54.798 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 429], 60.00th=[ 449], 00:09:54.798 | 70.00th=[ 486], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 668], 00:09:54.798 | 99.00th=[ 889], 99.50th=[ 1037], 99.90th=[ 2573], 99.95th=[ 2933], 00:09:54.798 | 99.99th=[ 2933] 00:09:54.798 write: IOPS=1488, BW=5954KiB/s (6097kB/s)(5960KiB/1001msec); 0 zone resets 00:09:54.798 slat (usec): min=21, max=203, avg=39.43, stdev=10.15 00:09:54.798 clat (usec): min=142, max=582, avg=291.35, stdev=77.26 00:09:54.798 lat (usec): min=168, max=666, avg=330.78, stdev=79.92 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 192], 20.00th=[ 223], 00:09:54.798 | 30.00th=[ 245], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 302], 00:09:54.798 | 70.00th=[ 330], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 429], 00:09:54.798 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 570], 99.95th=[ 586], 00:09:54.798 | 99.99th=[ 586] 00:09:54.798 bw ( KiB/s): min= 6928, max= 6928, per=24.76%, avg=6928.00, stdev= 0.00, samples=1 00:09:54.798 iops : min= 1732, max= 1732, avg=1732.00, stdev= 0.00, samples=1 00:09:54.798 lat (usec) : 250=19.29%, 500=69.65%, 750=9.63%, 1000=1.11% 00:09:54.798 lat (msec) : 2=0.24%, 4=0.08% 00:09:54.798 cpu : usr=2.50%, sys=6.50%, ctx=2520, majf=0, minf=18 00:09:54.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 issued rwts: total=1024,1490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.798 job3: (groupid=0, jobs=1): err= 0: pid=68742: Mon Jul 15 19:47:49 2024 00:09:54.798 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:54.798 slat (nsec): min=16458, max=95134, avg=35845.79, stdev=12318.42 00:09:54.798 clat (usec): min=237, max=2133, avg=512.03, stdev=129.83 00:09:54.798 lat (usec): min=258, max=2191, avg=547.88, stdev=137.25 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 289], 5.00th=[ 347], 10.00th=[ 371], 20.00th=[ 400], 00:09:54.798 | 30.00th=[ 424], 40.00th=[ 457], 50.00th=[ 498], 60.00th=[ 545], 00:09:54.798 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 717], 00:09:54.798 | 99.00th=[ 791], 99.50th=[ 840], 99.90th=[ 1012], 99.95th=[ 2147], 00:09:54.798 | 99.99th=[ 2147] 00:09:54.798 write: IOPS=1040, BW=4164KiB/s (4264kB/s)(4168KiB/1001msec); 0 zone resets 00:09:54.798 slat (usec): min=33, max=264, avg=45.96, stdev=11.88 00:09:54.798 clat (usec): min=156, max=773, avg=366.83, stdev=92.81 00:09:54.798 lat (usec): min=196, max=817, avg=412.79, stdev=96.18 00:09:54.798 clat percentiles (usec): 00:09:54.798 | 1.00th=[ 180], 5.00th=[ 245], 10.00th=[ 269], 20.00th=[ 285], 00:09:54.798 | 30.00th=[ 306], 40.00th=[ 330], 50.00th=[ 359], 60.00th=[ 379], 00:09:54.798 | 70.00th=[ 408], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 537], 00:09:54.798 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 775], 00:09:54.798 | 99.99th=[ 775] 00:09:54.798 bw ( KiB/s): min= 4968, max= 4968, per=17.76%, avg=4968.00, stdev= 0.00, samples=1 00:09:54.798 iops : min= 1242, max= 1242, 
avg=1242.00, stdev= 0.00, samples=1 00:09:54.798 lat (usec) : 250=2.90%, 500=67.72%, 750=28.12%, 1000=1.16% 00:09:54.798 lat (msec) : 2=0.05%, 4=0.05% 00:09:54.798 cpu : usr=1.80%, sys=6.90%, ctx=2067, majf=0, minf=9 00:09:54.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.798 issued rwts: total=1024,1042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.798 00:09:54.798 Run status group 0 (all jobs): 00:09:54.798 READ: bw=24.0MiB/s (25.1MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:09:54.798 WRITE: bw=27.3MiB/s (28.6MB/s), 4164KiB/s-9163KiB/s (4264kB/s-9383kB/s), io=27.3MiB (28.7MB), run=1001-1001msec 00:09:54.798 00:09:54.798 Disk stats (read/write): 00:09:54.798 nvme0n1: ios=1674/2048, merge=0/0, ticks=443/395, in_queue=838, util=88.15% 00:09:54.798 nvme0n2: ios=1721/2048, merge=0/0, ticks=464/386, in_queue=850, util=88.92% 00:09:54.798 nvme0n3: ios=1041/1094, merge=0/0, ticks=481/324, in_queue=805, util=89.36% 00:09:54.798 nvme0n4: ios=840/1024, merge=0/0, ticks=407/392, in_queue=799, util=89.82% 00:09:55.056 19:47:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:55.056 [global] 00:09:55.056 thread=1 00:09:55.056 invalidate=1 00:09:55.056 rw=write 00:09:55.056 time_based=1 00:09:55.056 runtime=1 00:09:55.056 ioengine=libaio 00:09:55.056 direct=1 00:09:55.056 bs=4096 00:09:55.056 iodepth=128 00:09:55.056 norandommap=0 00:09:55.056 numjobs=1 00:09:55.056 00:09:55.056 verify_dump=1 00:09:55.056 verify_backlog=512 00:09:55.056 verify_state_save=0 00:09:55.056 do_verify=1 00:09:55.056 verify=crc32c-intel 00:09:55.056 [job0] 00:09:55.056 filename=/dev/nvme0n1 00:09:55.056 [job1] 00:09:55.056 filename=/dev/nvme0n2 00:09:55.056 [job2] 00:09:55.056 filename=/dev/nvme0n3 00:09:55.056 [job3] 00:09:55.056 filename=/dev/nvme0n4 00:09:55.056 Could not set queue depth (nvme0n1) 00:09:55.056 Could not set queue depth (nvme0n2) 00:09:55.056 Could not set queue depth (nvme0n3) 00:09:55.056 Could not set queue depth (nvme0n4) 00:09:55.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.056 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.056 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.056 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.056 fio-3.35 00:09:55.056 Starting 4 threads 00:09:56.428 00:09:56.428 job0: (groupid=0, jobs=1): err= 0: pid=68796: Mon Jul 15 19:47:50 2024 00:09:56.428 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:56.428 slat (usec): min=5, max=4240, avg=117.38, stdev=559.83 00:09:56.428 clat (usec): min=11062, max=18245, avg=15659.74, stdev=1059.01 00:09:56.428 lat (usec): min=11677, max=18261, avg=15777.12, stdev=912.95 00:09:56.428 clat percentiles (usec): 00:09:56.428 | 1.00th=[11994], 5.00th=[14222], 10.00th=[14615], 20.00th=[15008], 00:09:56.428 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:09:56.428 | 70.00th=[16057], 80.00th=[16581], 90.00th=[16909], 
95.00th=[17433], 00:09:56.428 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:09:56.428 | 99.99th=[18220] 00:09:56.428 write: IOPS=4148, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1003msec); 0 zone resets 00:09:56.428 slat (usec): min=12, max=3729, avg=115.94, stdev=499.67 00:09:56.428 clat (usec): min=286, max=17292, avg=14987.48, stdev=1493.67 00:09:56.428 lat (usec): min=3203, max=17335, avg=15103.42, stdev=1405.49 00:09:56.428 clat percentiles (usec): 00:09:56.428 | 1.00th=[ 7898], 5.00th=[13435], 10.00th=[14091], 20.00th=[14353], 00:09:56.428 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:09:56.428 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16581], 00:09:56.428 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:09:56.428 | 99.99th=[17171] 00:09:56.428 bw ( KiB/s): min=16384, max=16384, per=36.13%, avg=16384.00, stdev= 0.00, samples=2 00:09:56.428 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:56.428 lat (usec) : 500=0.01% 00:09:56.428 lat (msec) : 4=0.22%, 10=0.56%, 20=99.21% 00:09:56.428 cpu : usr=4.89%, sys=12.77%, ctx=262, majf=0, minf=1 00:09:56.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.428 issued rwts: total=4096,4161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.428 job1: (groupid=0, jobs=1): err= 0: pid=68797: Mon Jul 15 19:47:50 2024 00:09:56.428 read: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec) 00:09:56.428 slat (usec): min=9, max=16890, avg=328.69, stdev=1469.91 00:09:56.428 clat (usec): min=24392, max=77671, avg=39441.31, stdev=8582.99 00:09:56.428 lat (usec): min=29294, max=77695, avg=39770.00, stdev=8742.92 00:09:56.428 clat percentiles (usec): 00:09:56.428 | 1.00th=[30540], 5.00th=[31589], 10.00th=[32900], 20.00th=[33424], 00:09:56.428 | 30.00th=[33817], 40.00th=[34866], 50.00th=[35390], 60.00th=[38536], 00:09:56.428 | 70.00th=[41681], 80.00th=[46400], 90.00th=[49546], 95.00th=[56886], 00:09:56.428 | 99.00th=[71828], 99.50th=[76022], 99.90th=[76022], 99.95th=[78119], 00:09:56.428 | 99.99th=[78119] 00:09:56.428 write: IOPS=1489, BW=5956KiB/s (6099kB/s)(5992KiB/1006msec); 0 zone resets 00:09:56.428 slat (usec): min=14, max=10961, avg=435.34, stdev=1531.83 00:09:56.428 clat (msec): min=2, max=103, avg=57.32, stdev=23.30 00:09:56.428 lat (msec): min=5, max=103, avg=57.75, stdev=23.43 00:09:56.428 clat percentiles (msec): 00:09:56.428 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:09:56.428 | 30.00th=[ 35], 40.00th=[ 46], 50.00th=[ 62], 60.00th=[ 64], 00:09:56.428 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 97], 00:09:56.428 | 99.00th=[ 103], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 104], 00:09:56.428 | 99.99th=[ 104] 00:09:56.428 bw ( KiB/s): min= 4672, max= 6288, per=12.08%, avg=5480.00, stdev=1142.68, samples=2 00:09:56.428 iops : min= 1168, max= 1572, avg=1370.00, stdev=285.67, samples=2 00:09:56.428 lat (msec) : 4=0.04%, 10=0.63%, 20=0.63%, 50=61.46%, 100=36.12% 00:09:56.428 lat (msec) : 250=1.11% 00:09:56.428 cpu : usr=1.49%, sys=5.07%, ctx=183, majf=0, minf=5 00:09:56.428 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:09:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.428 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.428 issued rwts: total=1024,1498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.428 job2: (groupid=0, jobs=1): err= 0: pid=68798: Mon Jul 15 19:47:50 2024 00:09:56.428 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:56.428 slat (usec): min=7, max=4630, avg=132.27, stdev=644.91 00:09:56.428 clat (usec): min=11963, max=19646, avg=17544.92, stdev=1030.71 00:09:56.429 lat (usec): min=15248, max=19672, avg=17677.19, stdev=816.03 00:09:56.429 clat percentiles (usec): 00:09:56.429 | 1.00th=[13566], 5.00th=[15795], 10.00th=[16450], 20.00th=[16909], 00:09:56.429 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:09:56.429 | 70.00th=[18220], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:09:56.429 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:09:56.429 | 99.99th=[19530] 00:09:56.429 write: IOPS=3686, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1003msec); 0 zone resets 00:09:56.429 slat (usec): min=11, max=5395, avg=133.25, stdev=601.82 00:09:56.429 clat (usec): min=2177, max=19960, avg=17162.67, stdev=1818.61 00:09:56.429 lat (usec): min=2203, max=19984, avg=17295.92, stdev=1721.13 00:09:56.429 clat percentiles (usec): 00:09:56.429 | 1.00th=[ 6783], 5.00th=[14484], 10.00th=[16319], 20.00th=[16909], 00:09:56.429 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:09:56.429 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:09:56.429 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:09:56.429 | 99.99th=[20055] 00:09:56.429 bw ( KiB/s): min=12808, max=15864, per=31.61%, avg=14336.00, stdev=2160.92, samples=2 00:09:56.429 iops : min= 3202, max= 3966, avg=3584.00, stdev=540.23, samples=2 00:09:56.429 lat (msec) : 4=0.25%, 10=0.44%, 20=99.31% 00:09:56.429 cpu : usr=4.09%, sys=11.28%, ctx=228, majf=0, minf=1 00:09:56.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:56.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.429 issued rwts: total=3584,3698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.429 job3: (groupid=0, jobs=1): err= 0: pid=68799: Mon Jul 15 19:47:50 2024 00:09:56.429 read: IOPS=1824, BW=7300KiB/s (7475kB/s)(7336KiB/1005msec) 00:09:56.429 slat (usec): min=7, max=11666, avg=272.10, stdev=1442.71 00:09:56.429 clat (usec): min=3952, max=48055, avg=33319.80, stdev=7554.58 00:09:56.429 lat (usec): min=3968, max=48075, avg=33591.90, stdev=7479.25 00:09:56.429 clat percentiles (usec): 00:09:56.429 | 1.00th=[11076], 5.00th=[23725], 10.00th=[27395], 20.00th=[29754], 00:09:56.429 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31327], 60.00th=[32113], 00:09:56.429 | 70.00th=[32900], 80.00th=[42206], 90.00th=[45351], 95.00th=[47449], 00:09:56.429 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:56.429 | 99.99th=[47973] 00:09:56.429 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:56.429 slat (usec): min=11, max=11048, avg=235.61, stdev=1206.72 00:09:56.429 clat (usec): min=19094, max=45405, avg=31373.83, stdev=6922.11 00:09:56.429 lat (usec): min=24342, max=45434, avg=31609.44, stdev=6851.87 00:09:56.429 clat percentiles (usec): 00:09:56.429 | 1.00th=[20579], 
5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:09:56.429 | 30.00th=[26084], 40.00th=[26608], 50.00th=[28705], 60.00th=[31589], 00:09:56.429 | 70.00th=[33817], 80.00th=[38536], 90.00th=[43254], 95.00th=[43779], 00:09:56.429 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:09:56.429 | 99.99th=[45351] 00:09:56.429 bw ( KiB/s): min= 8175, max= 8192, per=18.04%, avg=8183.50, stdev=12.02, samples=2 00:09:56.429 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:09:56.429 lat (msec) : 4=0.08%, 10=0.18%, 20=2.01%, 50=97.73% 00:09:56.429 cpu : usr=2.39%, sys=6.37%, ctx=124, majf=0, minf=6 00:09:56.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:56.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.429 issued rwts: total=1834,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.429 00:09:56.429 Run status group 0 (all jobs): 00:09:56.429 READ: bw=40.9MiB/s (42.9MB/s), 4072KiB/s-16.0MiB/s (4169kB/s-16.7MB/s), io=41.2MiB (43.2MB), run=1003-1006msec 00:09:56.429 WRITE: bw=44.3MiB/s (46.4MB/s), 5956KiB/s-16.2MiB/s (6099kB/s-17.0MB/s), io=44.6MiB (46.7MB), run=1003-1006msec 00:09:56.429 00:09:56.429 Disk stats (read/write): 00:09:56.429 nvme0n1: ios=3506/3584, merge=0/0, ticks=12290/11706, in_queue=23996, util=87.68% 00:09:56.429 nvme0n2: ios=1071/1215, merge=0/0, ticks=13805/20474, in_queue=34279, util=88.74% 00:09:56.429 nvme0n3: ios=3072/3168, merge=0/0, ticks=12172/12282, in_queue=24454, util=89.11% 00:09:56.429 nvme0n4: ios=1536/1664, merge=0/0, ticks=13235/12387, in_queue=25622, util=89.77% 00:09:56.429 19:47:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:56.429 [global] 00:09:56.429 thread=1 00:09:56.429 invalidate=1 00:09:56.429 rw=randwrite 00:09:56.429 time_based=1 00:09:56.429 runtime=1 00:09:56.429 ioengine=libaio 00:09:56.429 direct=1 00:09:56.429 bs=4096 00:09:56.429 iodepth=128 00:09:56.429 norandommap=0 00:09:56.429 numjobs=1 00:09:56.429 00:09:56.429 verify_dump=1 00:09:56.429 verify_backlog=512 00:09:56.429 verify_state_save=0 00:09:56.429 do_verify=1 00:09:56.429 verify=crc32c-intel 00:09:56.429 [job0] 00:09:56.429 filename=/dev/nvme0n1 00:09:56.429 [job1] 00:09:56.429 filename=/dev/nvme0n2 00:09:56.429 [job2] 00:09:56.429 filename=/dev/nvme0n3 00:09:56.429 [job3] 00:09:56.429 filename=/dev/nvme0n4 00:09:56.429 Could not set queue depth (nvme0n1) 00:09:56.429 Could not set queue depth (nvme0n2) 00:09:56.429 Could not set queue depth (nvme0n3) 00:09:56.429 Could not set queue depth (nvme0n4) 00:09:56.429 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.429 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.429 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.429 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.429 fio-3.35 00:09:56.429 Starting 4 threads 00:09:57.804 00:09:57.804 job0: (groupid=0, jobs=1): err= 0: pid=68852: Mon Jul 15 19:47:51 2024 00:09:57.804 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:09:57.804 slat (usec): 
min=8, max=9818, avg=125.95, stdev=823.38 00:09:57.804 clat (usec): min=9177, max=30939, avg=17479.90, stdev=2665.86 00:09:57.804 lat (usec): min=9197, max=37234, avg=17605.85, stdev=2704.89 00:09:57.804 clat percentiles (usec): 00:09:57.804 | 1.00th=[10552], 5.00th=[13960], 10.00th=[14615], 20.00th=[15533], 00:09:57.804 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[17957], 00:09:57.804 | 70.00th=[18744], 80.00th=[19530], 90.00th=[20055], 95.00th=[22676], 00:09:57.804 | 99.00th=[26084], 99.50th=[27395], 99.90th=[30802], 99.95th=[30802], 00:09:57.804 | 99.99th=[31065] 00:09:57.804 write: IOPS=4086, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:09:57.804 slat (usec): min=9, max=12494, avg=126.06, stdev=787.41 00:09:57.804 clat (usec): min=934, max=25143, avg=15689.46, stdev=2116.45 00:09:57.804 lat (usec): min=8440, max=25408, avg=15815.52, stdev=2001.71 00:09:57.804 clat percentiles (usec): 00:09:57.804 | 1.00th=[ 9110], 5.00th=[12911], 10.00th=[13435], 20.00th=[14353], 00:09:57.804 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15795], 60.00th=[16188], 00:09:57.804 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17957], 95.00th=[18744], 00:09:57.804 | 99.00th=[21627], 99.50th=[21627], 99.90th=[25035], 99.95th=[25035], 00:09:57.804 | 99.99th=[25035] 00:09:57.804 bw ( KiB/s): min=15848, max=15895, per=34.38%, avg=15871.50, stdev=33.23, samples=2 00:09:57.804 iops : min= 3962, max= 3973, avg=3967.50, stdev= 7.78, samples=2 00:09:57.804 lat (usec) : 1000=0.01% 00:09:57.804 lat (msec) : 10=1.46%, 20=91.67%, 50=6.86% 00:09:57.804 cpu : usr=3.80%, sys=11.49%, ctx=165, majf=0, minf=10 00:09:57.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:57.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.804 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.804 job1: (groupid=0, jobs=1): err= 0: pid=68853: Mon Jul 15 19:47:51 2024 00:09:57.804 read: IOPS=1574, BW=6297KiB/s (6448kB/s)(6328KiB/1005msec) 00:09:57.804 slat (usec): min=8, max=13963, avg=291.81, stdev=1264.49 00:09:57.804 clat (usec): min=964, max=50894, avg=35539.77, stdev=6785.54 00:09:57.804 lat (usec): min=4490, max=50910, avg=35831.58, stdev=6844.52 00:09:57.804 clat percentiles (usec): 00:09:57.804 | 1.00th=[ 6915], 5.00th=[25297], 10.00th=[31327], 20.00th=[33424], 00:09:57.804 | 30.00th=[34866], 40.00th=[34866], 50.00th=[35914], 60.00th=[36439], 00:09:57.804 | 70.00th=[36963], 80.00th=[38011], 90.00th=[43254], 95.00th=[45876], 00:09:57.804 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:09:57.804 | 99.99th=[51119] 00:09:57.804 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:57.804 slat (usec): min=5, max=16470, avg=256.41, stdev=1243.78 00:09:57.804 clat (usec): min=7475, max=50891, avg=33346.29, stdev=5479.25 00:09:57.805 lat (usec): min=7514, max=50913, avg=33602.70, stdev=5451.01 00:09:57.805 clat percentiles (usec): 00:09:57.805 | 1.00th=[19006], 5.00th=[22414], 10.00th=[25297], 20.00th=[30802], 00:09:57.805 | 30.00th=[32637], 40.00th=[33424], 50.00th=[33817], 60.00th=[34866], 00:09:57.805 | 70.00th=[35914], 80.00th=[36963], 90.00th=[38536], 95.00th=[42206], 00:09:57.805 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 00:09:57.805 | 99.99th=[51119] 00:09:57.805 bw ( KiB/s): min= 7520, 
max= 8192, per=17.02%, avg=7856.00, stdev=475.18, samples=2 00:09:57.805 iops : min= 1880, max= 2048, avg=1964.00, stdev=118.79, samples=2 00:09:57.805 lat (usec) : 1000=0.03% 00:09:57.805 lat (msec) : 10=1.29%, 20=0.94%, 50=97.30%, 100=0.44% 00:09:57.805 cpu : usr=1.69%, sys=6.47%, ctx=384, majf=0, minf=15 00:09:57.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:57.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.805 issued rwts: total=1582,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.805 job2: (groupid=0, jobs=1): err= 0: pid=68854: Mon Jul 15 19:47:51 2024 00:09:57.805 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:09:57.805 slat (usec): min=8, max=10696, avg=148.05, stdev=965.85 00:09:57.805 clat (usec): min=11659, max=32274, avg=20372.78, stdev=2326.71 00:09:57.805 lat (usec): min=11679, max=38257, avg=20520.83, stdev=2359.41 00:09:57.805 clat percentiles (usec): 00:09:57.805 | 1.00th=[12518], 5.00th=[17433], 10.00th=[18220], 20.00th=[19530], 00:09:57.805 | 30.00th=[20317], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:09:57.805 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21890], 95.00th=[22414], 00:09:57.805 | 99.00th=[31065], 99.50th=[31065], 99.90th=[32113], 99.95th=[32375], 00:09:57.805 | 99.99th=[32375] 00:09:57.805 write: IOPS=3435, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1005msec); 0 zone resets 00:09:57.805 slat (usec): min=5, max=16960, avg=150.47, stdev=985.39 00:09:57.805 clat (usec): min=746, max=28166, avg=18752.82, stdev=2376.95 00:09:57.805 lat (usec): min=10932, max=28199, avg=18903.29, stdev=2227.01 00:09:57.805 clat percentiles (usec): 00:09:57.805 | 1.00th=[11207], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:09:57.805 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19006], 60.00th=[19268], 00:09:57.805 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[21365], 00:09:57.805 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:09:57.805 | 99.99th=[28181] 00:09:57.805 bw ( KiB/s): min=13269, max=13304, per=28.78%, avg=13286.50, stdev=24.75, samples=2 00:09:57.805 iops : min= 3317, max= 3326, avg=3321.50, stdev= 6.36, samples=2 00:09:57.805 lat (usec) : 750=0.02% 00:09:57.805 lat (msec) : 10=0.05%, 20=54.59%, 50=45.35% 00:09:57.805 cpu : usr=3.59%, sys=10.36%, ctx=141, majf=0, minf=9 00:09:57.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:57.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.805 issued rwts: total=3072,3453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.805 job3: (groupid=0, jobs=1): err= 0: pid=68855: Mon Jul 15 19:47:51 2024 00:09:57.805 read: IOPS=1657, BW=6628KiB/s (6787kB/s)(6688KiB/1009msec) 00:09:57.805 slat (usec): min=6, max=17380, avg=284.78, stdev=1176.90 00:09:57.805 clat (usec): min=6799, max=53591, avg=35891.41, stdev=5841.25 00:09:57.805 lat (usec): min=12703, max=55466, avg=36176.19, stdev=5845.58 00:09:57.805 clat percentiles (usec): 00:09:57.805 | 1.00th=[18220], 5.00th=[27657], 10.00th=[30278], 20.00th=[32637], 00:09:57.805 | 30.00th=[34341], 40.00th=[34866], 50.00th=[35914], 60.00th=[36439], 00:09:57.805 | 70.00th=[36439], 
80.00th=[38011], 90.00th=[43254], 95.00th=[47449], 00:09:57.805 | 99.00th=[50594], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:09:57.805 | 99.99th=[53740] 00:09:57.805 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:09:57.805 slat (usec): min=5, max=16513, avg=248.36, stdev=1203.98 00:09:57.805 clat (usec): min=14660, max=53415, avg=33118.65, stdev=6226.16 00:09:57.805 lat (usec): min=15385, max=53425, avg=33367.01, stdev=6200.97 00:09:57.805 clat percentiles (usec): 00:09:57.805 | 1.00th=[16450], 5.00th=[20055], 10.00th=[24249], 20.00th=[27919], 00:09:57.805 | 30.00th=[31065], 40.00th=[32900], 50.00th=[33817], 60.00th=[34866], 00:09:57.805 | 70.00th=[36963], 80.00th=[38011], 90.00th=[40109], 95.00th=[42206], 00:09:57.805 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:57.805 | 99.99th=[53216] 00:09:57.805 bw ( KiB/s): min= 8175, max= 8192, per=17.73%, avg=8183.50, stdev=12.02, samples=2 00:09:57.805 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:09:57.805 lat (msec) : 10=0.03%, 20=3.55%, 50=95.83%, 100=0.59% 00:09:57.805 cpu : usr=1.98%, sys=6.35%, ctx=402, majf=0, minf=9 00:09:57.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:57.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.805 issued rwts: total=1672,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.805 00:09:57.805 Run status group 0 (all jobs): 00:09:57.805 READ: bw=38.4MiB/s (40.2MB/s), 6297KiB/s-14.0MiB/s (6448kB/s-14.7MB/s), io=38.7MiB (40.6MB), run=1002-1009msec 00:09:57.805 WRITE: bw=45.1MiB/s (47.3MB/s), 8119KiB/s-16.0MiB/s (8314kB/s-16.7MB/s), io=45.5MiB (47.7MB), run=1002-1009msec 00:09:57.805 00:09:57.805 Disk stats (read/write): 00:09:57.805 nvme0n1: ios=3122/3328, merge=0/0, ticks=51413/49031, in_queue=100444, util=87.76% 00:09:57.805 nvme0n2: ios=1491/1536, merge=0/0, ticks=26119/23833, in_queue=49952, util=85.63% 00:09:57.805 nvme0n3: ios=2560/2880, merge=0/0, ticks=49557/50943, in_queue=100500, util=89.12% 00:09:57.805 nvme0n4: ios=1530/1561, merge=0/0, ticks=27474/24453, in_queue=51927, util=88.53% 00:09:57.805 19:47:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:57.805 19:47:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68874 00:09:57.805 19:47:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:57.805 19:47:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:57.805 [global] 00:09:57.805 thread=1 00:09:57.805 invalidate=1 00:09:57.805 rw=read 00:09:57.805 time_based=1 00:09:57.805 runtime=10 00:09:57.805 ioengine=libaio 00:09:57.805 direct=1 00:09:57.805 bs=4096 00:09:57.805 iodepth=1 00:09:57.805 norandommap=1 00:09:57.805 numjobs=1 00:09:57.805 00:09:57.805 [job0] 00:09:57.805 filename=/dev/nvme0n1 00:09:57.805 [job1] 00:09:57.805 filename=/dev/nvme0n2 00:09:57.805 [job2] 00:09:57.805 filename=/dev/nvme0n3 00:09:57.805 [job3] 00:09:57.805 filename=/dev/nvme0n4 00:09:57.805 Could not set queue depth (nvme0n1) 00:09:57.805 Could not set queue depth (nvme0n2) 00:09:57.805 Could not set queue depth (nvme0n3) 00:09:57.805 Could not set queue depth (nvme0n4) 00:09:57.805 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:57.805 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.805 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.805 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.805 fio-3.35 00:09:57.805 Starting 4 threads 00:10:01.121 19:47:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:01.121 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45674496, buflen=4096 00:10:01.121 fio: pid=68921, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.121 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:01.379 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=33447936, buflen=4096 00:10:01.379 fio: pid=68920, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.379 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.379 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:01.637 fio: pid=68915, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.637 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=59621376, buflen=4096 00:10:01.637 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.637 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:01.895 fio: pid=68917, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:01.896 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=43446272, buflen=4096 00:10:01.896 00:10:01.896 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68915: Mon Jul 15 19:47:55 2024 00:10:01.896 read: IOPS=4184, BW=16.3MiB/s (17.1MB/s)(56.9MiB/3479msec) 00:10:01.896 slat (usec): min=10, max=15820, avg=18.60, stdev=174.73 00:10:01.896 clat (usec): min=137, max=2714, avg=218.76, stdev=48.16 00:10:01.896 lat (usec): min=153, max=16108, avg=237.36, stdev=181.79 00:10:01.896 clat percentiles (usec): 00:10:01.896 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 198], 00:10:01.896 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:10:01.896 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:10:01.896 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 578], 99.95th=[ 898], 00:10:01.896 | 99.99th=[ 2376] 00:10:01.896 bw ( KiB/s): min=15520, max=17496, per=34.94%, avg=16560.00, stdev=693.04, samples=6 00:10:01.896 iops : min= 3880, max= 4374, avg=4140.00, stdev=173.26, samples=6 00:10:01.896 lat (usec) : 250=89.30%, 500=10.59%, 750=0.03%, 1000=0.03% 00:10:01.896 lat (msec) : 2=0.03%, 4=0.02% 00:10:01.896 cpu : usr=1.41%, sys=5.81%, ctx=14561, majf=0, minf=1 00:10:01.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 issued rwts: total=14557,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:01.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.896 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68917: Mon Jul 15 19:47:55 2024 00:10:01.896 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(41.4MiB/3754msec) 00:10:01.896 slat (usec): min=10, max=12704, avg=26.69, stdev=235.73 00:10:01.896 clat (usec): min=130, max=4966, avg=324.81, stdev=120.42 00:10:01.896 lat (usec): min=144, max=13007, avg=351.51, stdev=263.91 00:10:01.896 clat percentiles (usec): 00:10:01.896 | 1.00th=[ 149], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 210], 00:10:01.896 | 30.00th=[ 302], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 363], 00:10:01.896 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 433], 00:10:01.896 | 99.00th=[ 498], 99.50th=[ 562], 99.90th=[ 1139], 99.95th=[ 1991], 00:10:01.896 | 99.99th=[ 4490] 00:10:01.896 bw ( KiB/s): min= 9776, max=15084, per=23.01%, avg=10906.86, stdev=1869.28, samples=7 00:10:01.896 iops : min= 2444, max= 3771, avg=2726.71, stdev=467.32, samples=7 00:10:01.896 lat (usec) : 250=23.84%, 500=75.20%, 750=0.74%, 1000=0.08% 00:10:01.896 lat (msec) : 2=0.08%, 4=0.03%, 10=0.02% 00:10:01.896 cpu : usr=1.41%, sys=5.60%, ctx=10619, majf=0, minf=1 00:10:01.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 issued rwts: total=10608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.896 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68920: Mon Jul 15 19:47:55 2024 00:10:01.896 read: IOPS=2533, BW=9.89MiB/s (10.4MB/s)(31.9MiB/3224msec) 00:10:01.896 slat (usec): min=10, max=8076, avg=21.93, stdev=116.17 00:10:01.896 clat (usec): min=154, max=7351, avg=370.62, stdev=167.11 00:10:01.896 lat (usec): min=169, max=8372, avg=392.55, stdev=203.71 00:10:01.896 clat percentiles (usec): 00:10:01.896 | 1.00th=[ 208], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 330], 00:10:01.896 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:10:01.896 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 441], 00:10:01.896 | 99.00th=[ 553], 99.50th=[ 627], 99.90th=[ 2474], 99.95th=[ 4047], 00:10:01.896 | 99.99th=[ 7373] 00:10:01.896 bw ( KiB/s): min= 8984, max=10744, per=21.13%, avg=10016.00, stdev=634.98, samples=6 00:10:01.896 iops : min= 2246, max= 2686, avg=2504.00, stdev=158.75, samples=6 00:10:01.896 lat (usec) : 250=2.66%, 500=95.71%, 750=1.32%, 1000=0.06% 00:10:01.896 lat (msec) : 2=0.11%, 4=0.06%, 10=0.06% 00:10:01.896 cpu : usr=1.21%, sys=4.56%, ctx=8177, majf=0, minf=1 00:10:01.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 issued rwts: total=8167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.896 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68921: Mon Jul 15 19:47:55 2024 00:10:01.896 read: IOPS=3749, BW=14.6MiB/s (15.4MB/s)(43.6MiB/2974msec) 00:10:01.896 slat (nsec): min=12777, max=93669, avg=15976.57, stdev=3269.93 
00:10:01.896 clat (usec): min=172, max=1573, avg=248.95, stdev=38.81 00:10:01.896 lat (usec): min=188, max=1590, avg=264.93, stdev=39.47 00:10:01.896 clat percentiles (usec): 00:10:01.896 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:10:01.896 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:10:01.896 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 322], 00:10:01.896 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 502], 99.95th=[ 537], 00:10:01.896 | 99.99th=[ 627] 00:10:01.896 bw ( KiB/s): min=12552, max=15976, per=31.43%, avg=14897.60, stdev=1359.80, samples=5 00:10:01.896 iops : min= 3138, max= 3994, avg=3724.40, stdev=339.95, samples=5 00:10:01.896 lat (usec) : 250=60.43%, 500=39.46%, 750=0.09% 00:10:01.896 lat (msec) : 2=0.01% 00:10:01.896 cpu : usr=1.11%, sys=5.15%, ctx=11155, majf=0, minf=1 00:10:01.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.896 issued rwts: total=11152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.896 00:10:01.896 Run status group 0 (all jobs): 00:10:01.896 READ: bw=46.3MiB/s (48.5MB/s), 9.89MiB/s-16.3MiB/s (10.4MB/s-17.1MB/s), io=174MiB (182MB), run=2974-3754msec 00:10:01.896 00:10:01.896 Disk stats (read/write): 00:10:01.896 nvme0n1: ios=14049/0, merge=0/0, ticks=3125/0, in_queue=3125, util=95.25% 00:10:01.896 nvme0n2: ios=9967/0, merge=0/0, ticks=3303/0, in_queue=3303, util=95.29% 00:10:01.896 nvme0n3: ios=7835/0, merge=0/0, ticks=2810/0, in_queue=2810, util=95.87% 00:10:01.896 nvme0n4: ios=10749/0, merge=0/0, ticks=2730/0, in_queue=2730, util=96.73% 00:10:01.896 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.896 19:47:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:02.154 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.154 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:02.412 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.413 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:02.671 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.671 19:47:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:03.237 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68874 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:03.238 19:47:57 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.238 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.496 nvmf hotplug test: fio failed as expected 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.496 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.754 rmmod nvme_tcp 00:10:03.754 rmmod nvme_fabrics 00:10:03.754 rmmod nvme_keyring 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68481 ']' 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68481 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68481 ']' 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68481 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68481 00:10:03.754 killing process with pid 68481 00:10:03.754 19:47:57 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.755 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.755 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68481' 00:10:03.755 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68481 00:10:03.755 19:47:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68481 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:04.013 00:10:04.013 real 0m20.340s 00:10:04.013 user 1m18.447s 00:10:04.013 sys 0m9.210s 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.013 19:47:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.013 ************************************ 00:10:04.013 END TEST nvmf_fio_target 00:10:04.013 ************************************ 00:10:04.013 19:47:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:04.013 19:47:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:04.013 19:47:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.013 19:47:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.013 19:47:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.013 ************************************ 00:10:04.013 START TEST nvmf_bdevio 00:10:04.013 ************************************ 00:10:04.013 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:04.013 * Looking for test storage... 
00:10:04.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.272 19:47:58 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:04.272 Cannot find device "nvmf_tgt_br" 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.272 Cannot find device "nvmf_tgt_br2" 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:04.272 Cannot find device "nvmf_tgt_br" 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:04.272 Cannot find device "nvmf_tgt_br2" 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.272 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:04.273 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:04.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:04.531 00:10:04.531 --- 10.0.0.2 ping statistics --- 00:10:04.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.531 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:04.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:04.531 00:10:04.531 --- 10.0.0.3 ping statistics --- 00:10:04.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.531 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:04.531 00:10:04.531 --- 10.0.0.1 ping statistics --- 00:10:04.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.531 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69182 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69182 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69182 ']' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.531 19:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.531 [2024-07-15 19:47:58.696854] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:10:04.531 [2024-07-15 19:47:58.696968] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.790 [2024-07-15 19:47:58.839285] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.790 [2024-07-15 19:47:58.956472] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.790 [2024-07-15 19:47:58.956544] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:04.790 [2024-07-15 19:47:58.956563] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.790 [2024-07-15 19:47:58.956576] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.790 [2024-07-15 19:47:58.956587] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.790 [2024-07-15 19:47:58.956842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.790 [2024-07-15 19:47:58.957313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.790 [2024-07-15 19:47:58.957404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.790 [2024-07-15 19:47:58.958021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.790 [2024-07-15 19:47:59.011349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.726 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.727 [2024-07-15 19:47:59.735022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.727 Malloc0 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.727 [2024-07-15 19:47:59.809954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.727 { 00:10:05.727 "params": { 00:10:05.727 "name": "Nvme$subsystem", 00:10:05.727 "trtype": "$TEST_TRANSPORT", 00:10:05.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.727 "adrfam": "ipv4", 00:10:05.727 "trsvcid": "$NVMF_PORT", 00:10:05.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.727 "hdgst": ${hdgst:-false}, 00:10:05.727 "ddgst": ${ddgst:-false} 00:10:05.727 }, 00:10:05.727 "method": "bdev_nvme_attach_controller" 00:10:05.727 } 00:10:05.727 EOF 00:10:05.727 )") 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:05.727 19:47:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.727 "params": { 00:10:05.727 "name": "Nvme1", 00:10:05.727 "trtype": "tcp", 00:10:05.727 "traddr": "10.0.0.2", 00:10:05.727 "adrfam": "ipv4", 00:10:05.727 "trsvcid": "4420", 00:10:05.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.727 "hdgst": false, 00:10:05.727 "ddgst": false 00:10:05.727 }, 00:10:05.727 "method": "bdev_nvme_attach_controller" 00:10:05.727 }' 00:10:05.727 [2024-07-15 19:47:59.867979] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:10:05.727 [2024-07-15 19:47:59.868065] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69221 ] 00:10:05.985 [2024-07-15 19:48:00.005958] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.985 [2024-07-15 19:48:00.126064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.985 [2024-07-15 19:48:00.126217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.985 [2024-07-15 19:48:00.126225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.985 [2024-07-15 19:48:00.190001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.244 I/O targets: 00:10:06.244 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:06.244 00:10:06.244 00:10:06.244 CUnit - A unit testing framework for C - Version 2.1-3 00:10:06.244 http://cunit.sourceforge.net/ 00:10:06.244 00:10:06.244 00:10:06.244 Suite: bdevio tests on: Nvme1n1 00:10:06.244 Test: blockdev write read block ...passed 00:10:06.244 Test: blockdev write zeroes read block ...passed 00:10:06.244 Test: blockdev write zeroes read no split ...passed 00:10:06.244 Test: blockdev write zeroes read split ...passed 00:10:06.244 Test: blockdev write zeroes read split partial ...passed 00:10:06.244 Test: blockdev reset ...[2024-07-15 19:48:00.344529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:06.244 [2024-07-15 19:48:00.344643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204c730 (9): Bad file descriptor 00:10:06.244 [2024-07-15 19:48:00.356911] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:06.244 passed 00:10:06.244 Test: blockdev write read 8 blocks ...passed 00:10:06.244 Test: blockdev write read size > 128k ...passed 00:10:06.244 Test: blockdev write read invalid size ...passed 00:10:06.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.244 Test: blockdev write read max offset ...passed 00:10:06.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.244 Test: blockdev writev readv 8 blocks ...passed 00:10:06.244 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.244 Test: blockdev writev readv block ...passed 00:10:06.244 Test: blockdev writev readv size > 128k ...passed 00:10:06.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.244 Test: blockdev comparev and writev ...[2024-07-15 19:48:00.365751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.365880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.365963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.366058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.366548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.366656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.366811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.367300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.367403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.367478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.367536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.368034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.368134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.368227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:06.244 [2024-07-15 19:48:00.368322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:06.244 passed 00:10:06.244 Test: blockdev nvme passthru rw ...passed 00:10:06.244 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:48:00.369428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:06.244 [2024-07-15 19:48:00.369530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.369776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:06.244 [2024-07-15 19:48:00.369876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.370057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:06.244 [2024-07-15 19:48:00.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:06.244 [2024-07-15 19:48:00.370401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:06.244 [2024-07-15 19:48:00.370503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:06.244 passed 00:10:06.244 Test: blockdev nvme admin passthru ...passed 00:10:06.244 Test: blockdev copy ...passed 00:10:06.244 00:10:06.244 Run Summary: Type Total Ran Passed Failed Inactive 00:10:06.244 suites 1 1 n/a 0 0 00:10:06.244 tests 23 23 23 0 0 00:10:06.244 asserts 152 152 152 0 n/a 00:10:06.244 00:10:06.244 Elapsed time = 0.157 seconds 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.502 rmmod nvme_tcp 00:10:06.502 rmmod nvme_fabrics 00:10:06.502 rmmod nvme_keyring 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:06.502 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69182 ']' 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69182 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69182 ']' 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69182 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69182 00:10:06.503 killing process with pid 69182 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69182' 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69182 00:10:06.503 19:48:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69182 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:07.069 00:10:07.069 real 0m2.878s 00:10:07.069 user 0m9.480s 00:10:07.069 sys 0m0.773s 00:10:07.069 19:48:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.069 ************************************ 00:10:07.069 END TEST nvmf_bdevio 00:10:07.069 ************************************ 00:10:07.070 19:48:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.070 19:48:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.070 19:48:01 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:07.070 19:48:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.070 19:48:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.070 19:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.070 ************************************ 00:10:07.070 START TEST nvmf_auth_target 00:10:07.070 ************************************ 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:07.070 * Looking for test storage... 
00:10:07.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:07.070 Cannot find device "nvmf_tgt_br" 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.070 Cannot find device "nvmf_tgt_br2" 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:07.070 Cannot find device "nvmf_tgt_br" 00:10:07.070 
19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:07.070 Cannot find device "nvmf_tgt_br2" 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:07.070 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:07.328 19:48:01 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:07.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:10:07.328 00:10:07.328 --- 10.0.0.2 ping statistics --- 00:10:07.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.328 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:07.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:07.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:07.328 00:10:07.328 --- 10.0.0.3 ping statistics --- 00:10:07.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.328 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:07.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:07.328 00:10:07.328 --- 10.0.0.1 ping statistics --- 00:10:07.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.328 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.328 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69396 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69396 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69396 ']' 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.587 19:48:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.587 19:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69428 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4e1bcd4987bf67888e5afae6039d253f001edd349e701adb 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AJJ 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4e1bcd4987bf67888e5afae6039d253f001edd349e701adb 0 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4e1bcd4987bf67888e5afae6039d253f001edd349e701adb 0 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4e1bcd4987bf67888e5afae6039d253f001edd349e701adb 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:08.517 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AJJ 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AJJ 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.AJJ 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09eb93475874c307540ccc4963c99c87a4432a6313200477a2b0035449fa6061 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Fii 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09eb93475874c307540ccc4963c99c87a4432a6313200477a2b0035449fa6061 3 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09eb93475874c307540ccc4963c99c87a4432a6313200477a2b0035449fa6061 3 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09eb93475874c307540ccc4963c99c87a4432a6313200477a2b0035449fa6061 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Fii 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Fii 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Fii 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2063078bf4e7c3f43436a3278dda13df 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.m1C 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2063078bf4e7c3f43436a3278dda13df 1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2063078bf4e7c3f43436a3278dda13df 1 
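Condensed, the gen_dhchap_key calls traced here reduce to the shell sketch below. Variable names mirror the trace; format_dhchap_key is SPDK's helper (the python - step in the trace) that wraps the raw hex in the DHHC-1:<digest-id>:<base64 secret>: form seen later in the nvme connect commands, and it is deliberately not re-implemented here.

  len=48                                          # hex characters of secret material (24 random bytes)
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # e.g. 4e1bcd4987bf6788... in this run
  file=$(mktemp -t spdk.key-null.XXX)             # becomes something like /tmp/spdk.key-null.AJJ
  format_dhchap_key "$key" 0 > "$file"            # digest id 0 = null, 1 = sha256, 2 = sha384, 3 = sha512 (per the digests map above)
  chmod 0600 "$file"                              # the trace restricts every key file to owner-only before registering it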
00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2063078bf4e7c3f43436a3278dda13df 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.m1C 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.m1C 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.m1C 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=081d36b73410459853a2cc17282a0f48da9ed816fc29279d 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.h5l 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 081d36b73410459853a2cc17282a0f48da9ed816fc29279d 2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 081d36b73410459853a2cc17282a0f48da9ed816fc29279d 2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=081d36b73410459853a2cc17282a0f48da9ed816fc29279d 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.h5l 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.h5l 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.h5l 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:08.775 
19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=59c5fe316a6387fc40268a0f1ee86ec6e6e1b92d6583d0ba 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tbq 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 59c5fe316a6387fc40268a0f1ee86ec6e6e1b92d6583d0ba 2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 59c5fe316a6387fc40268a0f1ee86ec6e6e1b92d6583d0ba 2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=59c5fe316a6387fc40268a0f1ee86ec6e6e1b92d6583d0ba 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:08.775 19:48:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tbq 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tbq 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Tbq 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f2a6d27b19b084d321213388ec35e08e 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.J2i 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f2a6d27b19b084d321213388ec35e08e 1 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f2a6d27b19b084d321213388ec35e08e 1 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f2a6d27b19b084d321213388ec35e08e 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.J2i 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.J2i 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.J2i 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a08e2851e0deb2837843e8f0a4bc31b5b794405fd68b2fd7035c2b1c4d913e7d 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ROp 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a08e2851e0deb2837843e8f0a4bc31b5b794405fd68b2fd7035c2b1c4d913e7d 3 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a08e2851e0deb2837843e8f0a4bc31b5b794405fd68b2fd7035c2b1c4d913e7d 3 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a08e2851e0deb2837843e8f0a4bc31b5b794405fd68b2fd7035c2b1c4d913e7d 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ROp 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ROp 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ROp 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:09.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69396 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69396 ']' 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.033 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
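With the four key files in place and both daemons up (the nvmf target started with -L nvmf_auth, the host-side spdk_tgt listening on /var/tmp/host.sock), the per-key authentication pass that follows reduces to a handful of RPCs. This is a condensed sketch using only calls visible in the surrounding trace; the rpc.py path, host NQN, and key file names are copied from it, and the target-side calls are shown against the default RPC socket for brevity (the harness issues them through its rpc_cmd wrapper).

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Register the secret files with both sides' keyrings.
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.AJJ                          # target
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fii
  $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.AJJ    # host
  $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fii

  # Constrain the host to one digest/dhgroup combination, allow the host NQN on the
  # subsystem with bidirectional DH-HMAC-CHAP keys, then attach over tcp.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0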
00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69428 /var/tmp/host.sock 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69428 ']' 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.290 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.548 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.548 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:09.548 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:09.548 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.548 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AJJ 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.AJJ 00:10:09.805 19:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AJJ 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Fii ]] 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fii 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fii 00:10:10.063 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fii 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.m1C 00:10:10.320 19:48:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.m1C 00:10:10.320 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.m1C 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.h5l ]] 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h5l 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h5l 00:10:10.577 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h5l 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tbq 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Tbq 00:10:10.852 19:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Tbq 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.J2i ]] 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J2i 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J2i 00:10:11.111 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J2i 00:10:11.401 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ROp 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ROp 00:10:11.402 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ROp 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.674 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.932 19:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.216 00:10:12.216 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:12.216 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:12.216 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.475 { 00:10:12.475 "cntlid": 1, 00:10:12.475 "qid": 0, 00:10:12.475 "state": "enabled", 00:10:12.475 "thread": "nvmf_tgt_poll_group_000", 00:10:12.475 "listen_address": { 00:10:12.475 "trtype": "TCP", 00:10:12.475 "adrfam": "IPv4", 00:10:12.475 "traddr": "10.0.0.2", 00:10:12.475 "trsvcid": "4420" 00:10:12.475 }, 00:10:12.475 "peer_address": { 00:10:12.475 "trtype": "TCP", 00:10:12.475 "adrfam": "IPv4", 00:10:12.475 "traddr": "10.0.0.1", 00:10:12.475 "trsvcid": "51930" 00:10:12.475 }, 00:10:12.475 "auth": { 00:10:12.475 "state": "completed", 00:10:12.475 "digest": "sha256", 00:10:12.475 "dhgroup": "null" 00:10:12.475 } 00:10:12.475 } 00:10:12.475 ]' 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.475 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.734 19:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.003 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.004 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.004 00:10:18.004 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.004 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.004 19:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.004 { 00:10:18.004 "cntlid": 3, 00:10:18.004 "qid": 0, 00:10:18.004 "state": "enabled", 00:10:18.004 "thread": "nvmf_tgt_poll_group_000", 00:10:18.004 "listen_address": { 00:10:18.004 "trtype": "TCP", 00:10:18.004 "adrfam": "IPv4", 00:10:18.004 "traddr": "10.0.0.2", 00:10:18.004 "trsvcid": "4420" 00:10:18.004 }, 00:10:18.004 "peer_address": { 00:10:18.004 "trtype": "TCP", 00:10:18.004 "adrfam": "IPv4", 00:10:18.004 "traddr": "10.0.0.1", 00:10:18.004 "trsvcid": "45538" 00:10:18.004 }, 00:10:18.004 "auth": { 00:10:18.004 "state": "completed", 00:10:18.004 "digest": "sha256", 00:10:18.004 "dhgroup": "null" 00:10:18.004 } 
00:10:18.004 } 00:10:18.004 ]' 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.004 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.262 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:18.262 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.262 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.262 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.262 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.519 19:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.084 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.342 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.600 00:10:19.600 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:19.600 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:19.600 19:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:19.858 { 00:10:19.858 "cntlid": 5, 00:10:19.858 "qid": 0, 00:10:19.858 "state": "enabled", 00:10:19.858 "thread": "nvmf_tgt_poll_group_000", 00:10:19.858 "listen_address": { 00:10:19.858 "trtype": "TCP", 00:10:19.858 "adrfam": "IPv4", 00:10:19.858 "traddr": "10.0.0.2", 00:10:19.858 "trsvcid": "4420" 00:10:19.858 }, 00:10:19.858 "peer_address": { 00:10:19.858 "trtype": "TCP", 00:10:19.858 "adrfam": "IPv4", 00:10:19.858 "traddr": "10.0.0.1", 00:10:19.858 "trsvcid": "45582" 00:10:19.858 }, 00:10:19.858 "auth": { 00:10:19.858 "state": "completed", 00:10:19.858 "digest": "sha256", 00:10:19.858 "dhgroup": "null" 00:10:19.858 } 00:10:19.858 } 00:10:19.858 ]' 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.858 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.116 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:20.116 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.116 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.116 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.116 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.373 19:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid 
f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:20.939 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.198 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.457 00:10:21.457 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:21.457 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:21.457 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:21.715 { 00:10:21.715 "cntlid": 7, 00:10:21.715 "qid": 0, 00:10:21.715 "state": "enabled", 00:10:21.715 "thread": "nvmf_tgt_poll_group_000", 00:10:21.715 "listen_address": { 00:10:21.715 "trtype": "TCP", 00:10:21.715 "adrfam": "IPv4", 00:10:21.715 "traddr": "10.0.0.2", 00:10:21.715 "trsvcid": "4420" 00:10:21.715 }, 00:10:21.715 "peer_address": { 00:10:21.715 "trtype": "TCP", 00:10:21.715 "adrfam": "IPv4", 00:10:21.715 "traddr": "10.0.0.1", 00:10:21.715 "trsvcid": "45602" 00:10:21.715 }, 00:10:21.715 "auth": { 00:10:21.715 "state": "completed", 00:10:21.715 "digest": "sha256", 00:10:21.715 "dhgroup": "null" 00:10:21.715 } 00:10:21.715 } 00:10:21.715 ]' 00:10:21.715 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:21.972 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.972 19:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:21.972 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:21.972 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:21.972 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.972 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.972 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.230 19:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:10:23.165 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.166 19:48:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.166 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.732 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:23.732 { 00:10:23.732 "cntlid": 9, 00:10:23.732 "qid": 0, 00:10:23.732 "state": "enabled", 00:10:23.732 "thread": "nvmf_tgt_poll_group_000", 00:10:23.732 "listen_address": { 00:10:23.732 "trtype": "TCP", 00:10:23.732 "adrfam": "IPv4", 00:10:23.732 "traddr": "10.0.0.2", 00:10:23.732 "trsvcid": "4420" 00:10:23.732 }, 00:10:23.732 "peer_address": { 00:10:23.732 "trtype": "TCP", 00:10:23.732 "adrfam": "IPv4", 00:10:23.732 "traddr": "10.0.0.1", 00:10:23.732 "trsvcid": "45636" 00:10:23.732 }, 00:10:23.732 "auth": { 00:10:23.732 "state": "completed", 00:10:23.732 
"digest": "sha256", 00:10:23.732 "dhgroup": "ffdhe2048" 00:10:23.732 } 00:10:23.732 } 00:10:23.732 ]' 00:10:23.732 19:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.991 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.249 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:24.816 19:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.074 19:48:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.074 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.331 00:10:25.331 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.331 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.331 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.896 { 00:10:25.896 "cntlid": 11, 00:10:25.896 "qid": 0, 00:10:25.896 "state": "enabled", 00:10:25.896 "thread": "nvmf_tgt_poll_group_000", 00:10:25.896 "listen_address": { 00:10:25.896 "trtype": "TCP", 00:10:25.896 "adrfam": "IPv4", 00:10:25.896 "traddr": "10.0.0.2", 00:10:25.896 "trsvcid": "4420" 00:10:25.896 }, 00:10:25.896 "peer_address": { 00:10:25.896 "trtype": "TCP", 00:10:25.896 "adrfam": "IPv4", 00:10:25.896 "traddr": "10.0.0.1", 00:10:25.896 "trsvcid": "37742" 00:10:25.896 }, 00:10:25.896 "auth": { 00:10:25.896 "state": "completed", 00:10:25.896 "digest": "sha256", 00:10:25.896 "dhgroup": "ffdhe2048" 00:10:25.896 } 00:10:25.896 } 00:10:25.896 ]' 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.896 19:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.896 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:25.896 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.896 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.896 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.896 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.154 19:48:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:10:26.765 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.765 19:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:26.765 19:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.765 19:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.765 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.765 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.765 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:26.765 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.330 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.588 00:10:27.588 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.588 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.588 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.848 { 00:10:27.848 "cntlid": 13, 00:10:27.848 "qid": 0, 00:10:27.848 "state": "enabled", 00:10:27.848 "thread": "nvmf_tgt_poll_group_000", 00:10:27.848 "listen_address": { 00:10:27.848 "trtype": "TCP", 00:10:27.848 "adrfam": "IPv4", 00:10:27.848 "traddr": "10.0.0.2", 00:10:27.848 "trsvcid": "4420" 00:10:27.848 }, 00:10:27.848 "peer_address": { 00:10:27.848 "trtype": "TCP", 00:10:27.848 "adrfam": "IPv4", 00:10:27.848 "traddr": "10.0.0.1", 00:10:27.848 "trsvcid": "37786" 00:10:27.848 }, 00:10:27.848 "auth": { 00:10:27.848 "state": "completed", 00:10:27.848 "digest": "sha256", 00:10:27.848 "dhgroup": "ffdhe2048" 00:10:27.848 } 00:10:27.848 } 00:10:27.848 ]' 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.848 19:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.848 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.848 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.848 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.848 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.848 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.417 19:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.984 19:48:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.984 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:29.243 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:29.500 00:10:29.500 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.500 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.500 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.758 { 00:10:29.758 "cntlid": 15, 00:10:29.758 "qid": 0, 00:10:29.758 "state": "enabled", 00:10:29.758 "thread": "nvmf_tgt_poll_group_000", 00:10:29.758 "listen_address": { 00:10:29.758 "trtype": "TCP", 00:10:29.758 "adrfam": "IPv4", 00:10:29.758 "traddr": "10.0.0.2", 00:10:29.758 "trsvcid": "4420" 00:10:29.758 }, 00:10:29.758 "peer_address": { 00:10:29.758 "trtype": "TCP", 
00:10:29.758 "adrfam": "IPv4", 00:10:29.758 "traddr": "10.0.0.1", 00:10:29.758 "trsvcid": "37808" 00:10:29.758 }, 00:10:29.758 "auth": { 00:10:29.758 "state": "completed", 00:10:29.758 "digest": "sha256", 00:10:29.758 "dhgroup": "ffdhe2048" 00:10:29.758 } 00:10:29.758 } 00:10:29.758 ]' 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.758 19:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.016 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.016 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.016 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.016 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.016 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.275 19:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:10:30.841 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.841 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:30.841 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.841 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.100 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.100 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.100 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.100 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.100 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.358 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.359 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.359 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.359 19:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.359 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.359 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.617 00:10:31.617 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:31.617 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:31.617 19:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:31.876 { 00:10:31.876 "cntlid": 17, 00:10:31.876 "qid": 0, 00:10:31.876 "state": "enabled", 00:10:31.876 "thread": "nvmf_tgt_poll_group_000", 00:10:31.876 "listen_address": { 00:10:31.876 "trtype": "TCP", 00:10:31.876 "adrfam": "IPv4", 00:10:31.876 "traddr": "10.0.0.2", 00:10:31.876 "trsvcid": "4420" 00:10:31.876 }, 00:10:31.876 "peer_address": { 00:10:31.876 "trtype": "TCP", 00:10:31.876 "adrfam": "IPv4", 00:10:31.876 "traddr": "10.0.0.1", 00:10:31.876 "trsvcid": "37828" 00:10:31.876 }, 00:10:31.876 "auth": { 00:10:31.876 "state": "completed", 00:10:31.876 "digest": "sha256", 00:10:31.876 "dhgroup": "ffdhe3072" 00:10:31.876 } 00:10:31.876 } 00:10:31.876 ]' 00:10:31.876 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.135 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.135 19:48:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.394 19:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.331 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.589 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.848 00:10:33.848 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.848 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.848 19:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.107 { 00:10:34.107 "cntlid": 19, 00:10:34.107 "qid": 0, 00:10:34.107 "state": "enabled", 00:10:34.107 "thread": "nvmf_tgt_poll_group_000", 00:10:34.107 "listen_address": { 00:10:34.107 "trtype": "TCP", 00:10:34.107 "adrfam": "IPv4", 00:10:34.107 "traddr": "10.0.0.2", 00:10:34.107 "trsvcid": "4420" 00:10:34.107 }, 00:10:34.107 "peer_address": { 00:10:34.107 "trtype": "TCP", 00:10:34.107 "adrfam": "IPv4", 00:10:34.107 "traddr": "10.0.0.1", 00:10:34.107 "trsvcid": "37842" 00:10:34.107 }, 00:10:34.107 "auth": { 00:10:34.107 "state": "completed", 00:10:34.107 "digest": "sha256", 00:10:34.107 "dhgroup": "ffdhe3072" 00:10:34.107 } 00:10:34.107 } 00:10:34.107 ]' 00:10:34.107 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.366 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.624 19:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:35.191 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.450 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.015 00:10:36.015 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.015 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.015 19:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.015 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.015 { 00:10:36.015 "cntlid": 21, 
00:10:36.015 "qid": 0, 00:10:36.015 "state": "enabled", 00:10:36.015 "thread": "nvmf_tgt_poll_group_000", 00:10:36.015 "listen_address": { 00:10:36.015 "trtype": "TCP", 00:10:36.016 "adrfam": "IPv4", 00:10:36.016 "traddr": "10.0.0.2", 00:10:36.016 "trsvcid": "4420" 00:10:36.016 }, 00:10:36.016 "peer_address": { 00:10:36.016 "trtype": "TCP", 00:10:36.016 "adrfam": "IPv4", 00:10:36.016 "traddr": "10.0.0.1", 00:10:36.016 "trsvcid": "44866" 00:10:36.016 }, 00:10:36.016 "auth": { 00:10:36.016 "state": "completed", 00:10:36.016 "digest": "sha256", 00:10:36.016 "dhgroup": "ffdhe3072" 00:10:36.016 } 00:10:36.016 } 00:10:36.016 ]' 00:10:36.016 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.274 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.533 19:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.470 19:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.060 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.060 19:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.319 19:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.319 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.319 { 00:10:38.319 "cntlid": 23, 00:10:38.319 "qid": 0, 00:10:38.319 "state": "enabled", 00:10:38.319 "thread": "nvmf_tgt_poll_group_000", 00:10:38.319 "listen_address": { 00:10:38.319 "trtype": "TCP", 00:10:38.319 "adrfam": "IPv4", 00:10:38.319 "traddr": "10.0.0.2", 00:10:38.319 "trsvcid": "4420" 00:10:38.319 }, 00:10:38.319 "peer_address": { 00:10:38.319 "trtype": "TCP", 00:10:38.319 "adrfam": "IPv4", 00:10:38.319 "traddr": "10.0.0.1", 00:10:38.319 "trsvcid": "44894" 00:10:38.319 }, 00:10:38.319 "auth": { 00:10:38.319 "state": "completed", 00:10:38.319 "digest": "sha256", 00:10:38.319 "dhgroup": "ffdhe3072" 00:10:38.319 } 00:10:38.319 } 00:10:38.319 ]' 00:10:38.319 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.320 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.578 19:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.146 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.712 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.713 19:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.713 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.713 19:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.971 00:10:39.971 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.971 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.971 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.231 { 00:10:40.231 "cntlid": 25, 00:10:40.231 "qid": 0, 00:10:40.231 "state": "enabled", 00:10:40.231 "thread": "nvmf_tgt_poll_group_000", 00:10:40.231 "listen_address": { 00:10:40.231 "trtype": "TCP", 00:10:40.231 "adrfam": "IPv4", 00:10:40.231 "traddr": "10.0.0.2", 00:10:40.231 "trsvcid": "4420" 00:10:40.231 }, 00:10:40.231 "peer_address": { 00:10:40.231 "trtype": "TCP", 00:10:40.231 "adrfam": "IPv4", 00:10:40.231 "traddr": "10.0.0.1", 00:10:40.231 "trsvcid": "44924" 00:10:40.231 }, 00:10:40.231 "auth": { 00:10:40.231 "state": "completed", 00:10:40.231 "digest": "sha256", 00:10:40.231 "dhgroup": "ffdhe4096" 00:10:40.231 } 00:10:40.231 } 00:10:40.231 ]' 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.231 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.489 19:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.473 
19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.473 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.730 19:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.988 00:10:41.988 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.988 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.988 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.247 { 00:10:42.247 "cntlid": 27, 00:10:42.247 "qid": 0, 00:10:42.247 "state": "enabled", 00:10:42.247 "thread": "nvmf_tgt_poll_group_000", 00:10:42.247 "listen_address": { 00:10:42.247 "trtype": "TCP", 00:10:42.247 "adrfam": "IPv4", 00:10:42.247 "traddr": "10.0.0.2", 00:10:42.247 "trsvcid": "4420" 00:10:42.247 }, 00:10:42.247 "peer_address": { 00:10:42.247 "trtype": "TCP", 00:10:42.247 "adrfam": "IPv4", 00:10:42.247 "traddr": "10.0.0.1", 00:10:42.247 "trsvcid": "44952" 00:10:42.247 }, 00:10:42.247 "auth": { 00:10:42.247 "state": "completed", 00:10:42.247 "digest": "sha256", 00:10:42.247 "dhgroup": "ffdhe4096" 00:10:42.247 } 00:10:42.247 } 00:10:42.247 ]' 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.247 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.505 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.505 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.505 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.763 19:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:43.329 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.587 19:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.153 00:10:44.153 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.153 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.153 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.412 { 00:10:44.412 "cntlid": 29, 00:10:44.412 "qid": 0, 00:10:44.412 "state": "enabled", 00:10:44.412 "thread": "nvmf_tgt_poll_group_000", 00:10:44.412 "listen_address": { 00:10:44.412 "trtype": "TCP", 00:10:44.412 "adrfam": "IPv4", 00:10:44.412 "traddr": "10.0.0.2", 00:10:44.412 "trsvcid": "4420" 00:10:44.412 }, 00:10:44.412 "peer_address": { 00:10:44.412 "trtype": "TCP", 00:10:44.412 "adrfam": "IPv4", 00:10:44.412 "traddr": "10.0.0.1", 00:10:44.412 "trsvcid": "44974" 00:10:44.412 }, 00:10:44.412 "auth": { 00:10:44.412 "state": "completed", 00:10:44.412 "digest": "sha256", 00:10:44.412 "dhgroup": "ffdhe4096" 00:10:44.412 } 00:10:44.412 } 00:10:44.412 ]' 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:44.412 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.670 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.670 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.670 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.928 19:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:45.494 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.752 19:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.011 19:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.011 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.011 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.270 00:10:46.270 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.270 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.270 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.529 { 00:10:46.529 "cntlid": 31, 00:10:46.529 "qid": 0, 00:10:46.529 "state": "enabled", 00:10:46.529 "thread": "nvmf_tgt_poll_group_000", 00:10:46.529 "listen_address": { 00:10:46.529 "trtype": "TCP", 00:10:46.529 "adrfam": "IPv4", 00:10:46.529 "traddr": "10.0.0.2", 00:10:46.529 "trsvcid": "4420" 00:10:46.529 }, 00:10:46.529 "peer_address": { 00:10:46.529 "trtype": "TCP", 00:10:46.529 "adrfam": "IPv4", 00:10:46.529 "traddr": "10.0.0.1", 00:10:46.529 "trsvcid": "54788" 00:10:46.529 }, 00:10:46.529 "auth": { 00:10:46.529 "state": "completed", 00:10:46.529 "digest": "sha256", 00:10:46.529 "dhgroup": "ffdhe4096" 00:10:46.529 } 00:10:46.529 } 00:10:46.529 ]' 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.529 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.788 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:46.788 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.788 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.788 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.788 19:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.048 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.984 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.984 19:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.984 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.549 00:10:48.549 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.549 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.549 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
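Every connect_authenticate iteration in this trace repeats the same target/host RPC sequence through rpc.py; the block below is a condensed sketch of one iteration, assembled only from commands visible in this run (the /var/tmp/host.sock socket, the NQNs, the 10.0.0.2:4420 listener and the key0/ckey0 key names are taken from the log; abbreviating the rpc.py path is the only liberty taken):

    # host side: restrict the initiator to one digest/dhgroup combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN, with a controller key so authentication is mutual
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach the controller; the attach only succeeds if DH-HMAC-CHAP completes
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # target side: confirm the new qpair negotiated the expected parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

The three jq reads correspond to the [[ sha256 == sha256 ]], [[ ffdhe6144 == ffdhe6144 ]] and [[ completed == completed ]] assertions that follow each qpairs dump in the log.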
00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.808 { 00:10:48.808 "cntlid": 33, 00:10:48.808 "qid": 0, 00:10:48.808 "state": "enabled", 00:10:48.808 "thread": "nvmf_tgt_poll_group_000", 00:10:48.808 "listen_address": { 00:10:48.808 "trtype": "TCP", 00:10:48.808 "adrfam": "IPv4", 00:10:48.808 "traddr": "10.0.0.2", 00:10:48.808 "trsvcid": "4420" 00:10:48.808 }, 00:10:48.808 "peer_address": { 00:10:48.808 "trtype": "TCP", 00:10:48.808 "adrfam": "IPv4", 00:10:48.808 "traddr": "10.0.0.1", 00:10:48.808 "trsvcid": "54830" 00:10:48.808 }, 00:10:48.808 "auth": { 00:10:48.808 "state": "completed", 00:10:48.808 "digest": "sha256", 00:10:48.808 "dhgroup": "ffdhe6144" 00:10:48.808 } 00:10:48.808 } 00:10:48.808 ]' 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.808 19:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.808 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.808 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.808 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.067 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.018 19:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.276 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.534 00:10:50.534 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.534 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.534 19:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.792 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.792 { 00:10:50.792 "cntlid": 35, 00:10:50.792 "qid": 0, 00:10:50.792 "state": "enabled", 00:10:50.792 "thread": "nvmf_tgt_poll_group_000", 00:10:50.792 "listen_address": { 00:10:50.792 "trtype": "TCP", 00:10:50.792 "adrfam": "IPv4", 00:10:50.793 "traddr": "10.0.0.2", 00:10:50.793 "trsvcid": "4420" 00:10:50.793 }, 00:10:50.793 "peer_address": { 00:10:50.793 "trtype": "TCP", 00:10:50.793 "adrfam": "IPv4", 00:10:50.793 "traddr": "10.0.0.1", 00:10:50.793 "trsvcid": "54870" 00:10:50.793 }, 00:10:50.793 "auth": { 00:10:50.793 "state": "completed", 00:10:50.793 "digest": "sha256", 00:10:50.793 "dhgroup": "ffdhe6144" 00:10:50.793 } 00:10:50.793 } 00:10:50.793 ]' 00:10:50.793 19:48:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.051 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.309 19:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.246 19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.505 
19:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.505 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.505 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.763 00:10:52.763 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.763 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.763 19:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.023 { 00:10:53.023 "cntlid": 37, 00:10:53.023 "qid": 0, 00:10:53.023 "state": "enabled", 00:10:53.023 "thread": "nvmf_tgt_poll_group_000", 00:10:53.023 "listen_address": { 00:10:53.023 "trtype": "TCP", 00:10:53.023 "adrfam": "IPv4", 00:10:53.023 "traddr": "10.0.0.2", 00:10:53.023 "trsvcid": "4420" 00:10:53.023 }, 00:10:53.023 "peer_address": { 00:10:53.023 "trtype": "TCP", 00:10:53.023 "adrfam": "IPv4", 00:10:53.023 "traddr": "10.0.0.1", 00:10:53.023 "trsvcid": "54892" 00:10:53.023 }, 00:10:53.023 "auth": { 00:10:53.023 "state": "completed", 00:10:53.023 "digest": "sha256", 00:10:53.023 "dhgroup": "ffdhe6144" 00:10:53.023 } 00:10:53.023 } 00:10:53.023 ]' 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.023 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.282 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:53.282 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.282 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.282 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.282 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.540 19:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid 
f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:54.107 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.367 19:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.936 00:10:54.936 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.936 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.936 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.195 { 00:10:55.195 "cntlid": 39, 00:10:55.195 "qid": 0, 00:10:55.195 "state": "enabled", 00:10:55.195 "thread": "nvmf_tgt_poll_group_000", 00:10:55.195 "listen_address": { 00:10:55.195 "trtype": "TCP", 00:10:55.195 "adrfam": "IPv4", 00:10:55.195 "traddr": "10.0.0.2", 00:10:55.195 "trsvcid": "4420" 00:10:55.195 }, 00:10:55.195 "peer_address": { 00:10:55.195 "trtype": "TCP", 00:10:55.195 "adrfam": "IPv4", 00:10:55.195 "traddr": "10.0.0.1", 00:10:55.195 "trsvcid": "54926" 00:10:55.195 }, 00:10:55.195 "auth": { 00:10:55.195 "state": "completed", 00:10:55.195 "digest": "sha256", 00:10:55.195 "dhgroup": "ffdhe6144" 00:10:55.195 } 00:10:55.195 } 00:10:55.195 ]' 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:55.195 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.453 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.453 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.453 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.710 19:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:10:56.275 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.533 19:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.101 00:10:57.101 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.101 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.101 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.360 { 00:10:57.360 "cntlid": 41, 00:10:57.360 "qid": 0, 00:10:57.360 "state": "enabled", 00:10:57.360 "thread": "nvmf_tgt_poll_group_000", 00:10:57.360 "listen_address": { 00:10:57.360 "trtype": "TCP", 00:10:57.360 "adrfam": "IPv4", 00:10:57.360 "traddr": "10.0.0.2", 00:10:57.360 "trsvcid": "4420" 00:10:57.360 }, 00:10:57.360 "peer_address": { 00:10:57.360 "trtype": "TCP", 00:10:57.360 "adrfam": "IPv4", 00:10:57.360 "traddr": "10.0.0.1", 00:10:57.360 "trsvcid": "45666" 00:10:57.360 }, 00:10:57.360 "auth": { 00:10:57.360 
"state": "completed", 00:10:57.360 "digest": "sha256", 00:10:57.360 "dhgroup": "ffdhe8192" 00:10:57.360 } 00:10:57.360 } 00:10:57.360 ]' 00:10:57.360 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.618 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.877 19:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:10:58.444 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.444 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:10:58.444 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.444 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.703 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.703 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.703 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:58.703 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.005 19:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.581 00:10:59.581 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.582 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.582 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.840 { 00:10:59.840 "cntlid": 43, 00:10:59.840 "qid": 0, 00:10:59.840 "state": "enabled", 00:10:59.840 "thread": "nvmf_tgt_poll_group_000", 00:10:59.840 "listen_address": { 00:10:59.840 "trtype": "TCP", 00:10:59.840 "adrfam": "IPv4", 00:10:59.840 "traddr": "10.0.0.2", 00:10:59.840 "trsvcid": "4420" 00:10:59.840 }, 00:10:59.840 "peer_address": { 00:10:59.840 "trtype": "TCP", 00:10:59.840 "adrfam": "IPv4", 00:10:59.840 "traddr": "10.0.0.1", 00:10:59.840 "trsvcid": "45692" 00:10:59.840 }, 00:10:59.840 "auth": { 00:10:59.840 "state": "completed", 00:10:59.840 "digest": "sha256", 00:10:59.840 "dhgroup": "ffdhe8192" 00:10:59.840 } 00:10:59.840 } 00:10:59.840 ]' 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.840 19:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.098 19:48:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.032 19:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.291 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.859 00:11:01.859 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.859 19:48:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.859 19:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.119 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.119 { 00:11:02.119 "cntlid": 45, 00:11:02.119 "qid": 0, 00:11:02.119 "state": "enabled", 00:11:02.119 "thread": "nvmf_tgt_poll_group_000", 00:11:02.119 "listen_address": { 00:11:02.119 "trtype": "TCP", 00:11:02.119 "adrfam": "IPv4", 00:11:02.119 "traddr": "10.0.0.2", 00:11:02.119 "trsvcid": "4420" 00:11:02.119 }, 00:11:02.119 "peer_address": { 00:11:02.119 "trtype": "TCP", 00:11:02.119 "adrfam": "IPv4", 00:11:02.119 "traddr": "10.0.0.1", 00:11:02.119 "trsvcid": "45708" 00:11:02.119 }, 00:11:02.119 "auth": { 00:11:02.119 "state": "completed", 00:11:02.119 "digest": "sha256", 00:11:02.120 "dhgroup": "ffdhe8192" 00:11:02.120 } 00:11:02.120 } 00:11:02.120 ]' 00:11:02.120 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.120 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.120 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.120 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:02.120 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.378 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.378 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.378 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.638 19:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:03.204 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.204 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:03.204 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.205 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.205 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.205 19:48:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.205 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:03.205 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:03.463 19:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.400 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.400 { 00:11:04.400 "cntlid": 47, 00:11:04.400 "qid": 0, 00:11:04.400 "state": "enabled", 00:11:04.400 "thread": "nvmf_tgt_poll_group_000", 00:11:04.400 "listen_address": { 00:11:04.400 "trtype": "TCP", 00:11:04.400 "adrfam": "IPv4", 00:11:04.400 "traddr": "10.0.0.2", 00:11:04.400 "trsvcid": "4420" 00:11:04.400 }, 00:11:04.400 "peer_address": { 00:11:04.400 "trtype": "TCP", 
00:11:04.400 "adrfam": "IPv4", 00:11:04.400 "traddr": "10.0.0.1", 00:11:04.400 "trsvcid": "45738" 00:11:04.400 }, 00:11:04.400 "auth": { 00:11:04.400 "state": "completed", 00:11:04.400 "digest": "sha256", 00:11:04.400 "dhgroup": "ffdhe8192" 00:11:04.400 } 00:11:04.400 } 00:11:04.400 ]' 00:11:04.400 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.658 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.918 19:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.487 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.746 
19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.746 19:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.004 00:11:06.264 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.264 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.264 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.523 { 00:11:06.523 "cntlid": 49, 00:11:06.523 "qid": 0, 00:11:06.523 "state": "enabled", 00:11:06.523 "thread": "nvmf_tgt_poll_group_000", 00:11:06.523 "listen_address": { 00:11:06.523 "trtype": "TCP", 00:11:06.523 "adrfam": "IPv4", 00:11:06.523 "traddr": "10.0.0.2", 00:11:06.523 "trsvcid": "4420" 00:11:06.523 }, 00:11:06.523 "peer_address": { 00:11:06.523 "trtype": "TCP", 00:11:06.523 "adrfam": "IPv4", 00:11:06.523 "traddr": "10.0.0.1", 00:11:06.523 "trsvcid": "34266" 00:11:06.523 }, 00:11:06.523 "auth": { 00:11:06.523 "state": "completed", 00:11:06.523 "digest": "sha384", 00:11:06.523 "dhgroup": "null" 00:11:06.523 } 00:11:06.523 } 00:11:06.523 ]' 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.523 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.524 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:11:06.524 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.783 19:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.721 19:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.289 00:11:08.289 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.289 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.289 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.582 { 00:11:08.582 "cntlid": 51, 00:11:08.582 "qid": 0, 00:11:08.582 "state": "enabled", 00:11:08.582 "thread": "nvmf_tgt_poll_group_000", 00:11:08.582 "listen_address": { 00:11:08.582 "trtype": "TCP", 00:11:08.582 "adrfam": "IPv4", 00:11:08.582 "traddr": "10.0.0.2", 00:11:08.582 "trsvcid": "4420" 00:11:08.582 }, 00:11:08.582 "peer_address": { 00:11:08.582 "trtype": "TCP", 00:11:08.582 "adrfam": "IPv4", 00:11:08.582 "traddr": "10.0.0.1", 00:11:08.582 "trsvcid": "34296" 00:11:08.582 }, 00:11:08.582 "auth": { 00:11:08.582 "state": "completed", 00:11:08.582 "digest": "sha384", 00:11:08.582 "dhgroup": "null" 00:11:08.582 } 00:11:08.582 } 00:11:08.582 ]' 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.582 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.845 19:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:09.417 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:09.676 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.934 19:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.193 00:11:10.193 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.193 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.193 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.451 { 00:11:10.451 "cntlid": 53, 00:11:10.451 "qid": 0, 00:11:10.451 "state": "enabled", 
00:11:10.451 "thread": "nvmf_tgt_poll_group_000", 00:11:10.451 "listen_address": { 00:11:10.451 "trtype": "TCP", 00:11:10.451 "adrfam": "IPv4", 00:11:10.451 "traddr": "10.0.0.2", 00:11:10.451 "trsvcid": "4420" 00:11:10.451 }, 00:11:10.451 "peer_address": { 00:11:10.451 "trtype": "TCP", 00:11:10.451 "adrfam": "IPv4", 00:11:10.451 "traddr": "10.0.0.1", 00:11:10.451 "trsvcid": "34342" 00:11:10.451 }, 00:11:10.451 "auth": { 00:11:10.451 "state": "completed", 00:11:10.451 "digest": "sha384", 00:11:10.451 "dhgroup": "null" 00:11:10.451 } 00:11:10.451 } 00:11:10.451 ]' 00:11:10.451 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.710 19:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.968 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:11.536 19:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:11.794 
19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.794 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.053 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.053 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.053 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.311 00:11:12.311 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.311 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.311 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.569 { 00:11:12.569 "cntlid": 55, 00:11:12.569 "qid": 0, 00:11:12.569 "state": "enabled", 00:11:12.569 "thread": "nvmf_tgt_poll_group_000", 00:11:12.569 "listen_address": { 00:11:12.569 "trtype": "TCP", 00:11:12.569 "adrfam": "IPv4", 00:11:12.569 "traddr": "10.0.0.2", 00:11:12.569 "trsvcid": "4420" 00:11:12.569 }, 00:11:12.569 "peer_address": { 00:11:12.569 "trtype": "TCP", 00:11:12.569 "adrfam": "IPv4", 00:11:12.569 "traddr": "10.0.0.1", 00:11:12.569 "trsvcid": "34366" 00:11:12.569 }, 00:11:12.569 "auth": { 00:11:12.569 "state": "completed", 00:11:12.569 "digest": "sha384", 00:11:12.569 "dhgroup": "null" 00:11:12.569 } 00:11:12.569 } 00:11:12.569 ]' 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.569 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:12.827 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.827 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.827 19:49:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.827 19:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.085 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:13.659 19:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.918 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.176 00:11:14.176 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.176 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.176 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.435 { 00:11:14.435 "cntlid": 57, 00:11:14.435 "qid": 0, 00:11:14.435 "state": "enabled", 00:11:14.435 "thread": "nvmf_tgt_poll_group_000", 00:11:14.435 "listen_address": { 00:11:14.435 "trtype": "TCP", 00:11:14.435 "adrfam": "IPv4", 00:11:14.435 "traddr": "10.0.0.2", 00:11:14.435 "trsvcid": "4420" 00:11:14.435 }, 00:11:14.435 "peer_address": { 00:11:14.435 "trtype": "TCP", 00:11:14.435 "adrfam": "IPv4", 00:11:14.435 "traddr": "10.0.0.1", 00:11:14.435 "trsvcid": "34394" 00:11:14.435 }, 00:11:14.435 "auth": { 00:11:14.435 "state": "completed", 00:11:14.435 "digest": "sha384", 00:11:14.435 "dhgroup": "ffdhe2048" 00:11:14.435 } 00:11:14.435 } 00:11:14.435 ]' 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.435 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.694 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.694 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.694 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.694 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.694 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.952 19:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:15.520 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.779 19:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.044 00:11:16.045 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.045 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.045 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.305 
19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.305 { 00:11:16.305 "cntlid": 59, 00:11:16.305 "qid": 0, 00:11:16.305 "state": "enabled", 00:11:16.305 "thread": "nvmf_tgt_poll_group_000", 00:11:16.305 "listen_address": { 00:11:16.305 "trtype": "TCP", 00:11:16.305 "adrfam": "IPv4", 00:11:16.305 "traddr": "10.0.0.2", 00:11:16.305 "trsvcid": "4420" 00:11:16.305 }, 00:11:16.305 "peer_address": { 00:11:16.305 "trtype": "TCP", 00:11:16.305 "adrfam": "IPv4", 00:11:16.305 "traddr": "10.0.0.1", 00:11:16.305 "trsvcid": "43194" 00:11:16.305 }, 00:11:16.305 "auth": { 00:11:16.305 "state": "completed", 00:11:16.305 "digest": "sha384", 00:11:16.305 "dhgroup": "ffdhe2048" 00:11:16.305 } 00:11:16.305 } 00:11:16.305 ]' 00:11:16.305 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.564 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.822 19:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:17.389 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.016 19:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.016 00:11:18.016 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.016 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.016 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.275 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.275 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.275 19:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.275 19:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.534 { 00:11:18.534 "cntlid": 61, 00:11:18.534 "qid": 0, 00:11:18.534 "state": "enabled", 00:11:18.534 "thread": "nvmf_tgt_poll_group_000", 00:11:18.534 "listen_address": { 00:11:18.534 "trtype": "TCP", 00:11:18.534 "adrfam": "IPv4", 00:11:18.534 "traddr": "10.0.0.2", 00:11:18.534 "trsvcid": "4420" 00:11:18.534 }, 00:11:18.534 "peer_address": { 00:11:18.534 "trtype": "TCP", 00:11:18.534 "adrfam": "IPv4", 00:11:18.534 "traddr": "10.0.0.1", 00:11:18.534 "trsvcid": "43224" 00:11:18.534 }, 00:11:18.534 "auth": { 00:11:18.534 "state": "completed", 00:11:18.534 "digest": "sha384", 00:11:18.534 "dhgroup": "ffdhe2048" 00:11:18.534 } 00:11:18.534 } 00:11:18.534 ]' 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.534 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.793 19:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:19.729 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.988 19:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:19.988 19:49:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.246 00:11:20.246 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.246 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.246 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.507 { 00:11:20.507 "cntlid": 63, 00:11:20.507 "qid": 0, 00:11:20.507 "state": "enabled", 00:11:20.507 "thread": "nvmf_tgt_poll_group_000", 00:11:20.507 "listen_address": { 00:11:20.507 "trtype": "TCP", 00:11:20.507 "adrfam": "IPv4", 00:11:20.507 "traddr": "10.0.0.2", 00:11:20.507 "trsvcid": "4420" 00:11:20.507 }, 00:11:20.507 "peer_address": { 00:11:20.507 "trtype": "TCP", 00:11:20.507 "adrfam": "IPv4", 00:11:20.507 "traddr": "10.0.0.1", 00:11:20.507 "trsvcid": "43250" 00:11:20.507 }, 00:11:20.507 "auth": { 00:11:20.507 "state": "completed", 00:11:20.507 "digest": "sha384", 00:11:20.507 "dhgroup": "ffdhe2048" 00:11:20.507 } 00:11:20.507 } 00:11:20.507 ]' 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.507 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.765 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.765 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.765 19:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.023 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:21.589 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.846 19:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.104 00:11:22.104 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.104 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.104 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.363 19:49:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.363 { 00:11:22.363 "cntlid": 65, 00:11:22.363 "qid": 0, 00:11:22.363 "state": "enabled", 00:11:22.363 "thread": "nvmf_tgt_poll_group_000", 00:11:22.363 "listen_address": { 00:11:22.363 "trtype": "TCP", 00:11:22.363 "adrfam": "IPv4", 00:11:22.363 "traddr": "10.0.0.2", 00:11:22.363 "trsvcid": "4420" 00:11:22.363 }, 00:11:22.363 "peer_address": { 00:11:22.363 "trtype": "TCP", 00:11:22.363 "adrfam": "IPv4", 00:11:22.363 "traddr": "10.0.0.1", 00:11:22.363 "trsvcid": "43282" 00:11:22.363 }, 00:11:22.363 "auth": { 00:11:22.363 "state": "completed", 00:11:22.363 "digest": "sha384", 00:11:22.363 "dhgroup": "ffdhe3072" 00:11:22.363 } 00:11:22.363 } 00:11:22.363 ]' 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.363 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.626 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:22.626 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.626 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.626 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.626 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.892 19:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:23.460 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.719 19:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.287 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.287 19:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.545 { 00:11:24.545 "cntlid": 67, 00:11:24.545 "qid": 0, 00:11:24.545 "state": "enabled", 00:11:24.545 "thread": "nvmf_tgt_poll_group_000", 00:11:24.545 "listen_address": { 00:11:24.545 "trtype": "TCP", 00:11:24.545 "adrfam": "IPv4", 00:11:24.545 "traddr": "10.0.0.2", 00:11:24.545 "trsvcid": "4420" 00:11:24.545 }, 00:11:24.545 "peer_address": { 00:11:24.545 "trtype": "TCP", 00:11:24.545 "adrfam": "IPv4", 00:11:24.545 "traddr": "10.0.0.1", 00:11:24.545 "trsvcid": "43318" 00:11:24.545 }, 00:11:24.545 "auth": { 00:11:24.545 "state": "completed", 00:11:24.545 "digest": "sha384", 00:11:24.545 "dhgroup": "ffdhe3072" 00:11:24.545 } 00:11:24.545 } 00:11:24.545 ]' 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.545 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.803 19:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.738 19:49:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.738 19:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.304 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.304 { 00:11:26.304 "cntlid": 69, 00:11:26.304 "qid": 0, 00:11:26.304 "state": "enabled", 00:11:26.304 "thread": "nvmf_tgt_poll_group_000", 00:11:26.304 "listen_address": { 00:11:26.304 "trtype": "TCP", 00:11:26.304 "adrfam": "IPv4", 00:11:26.304 "traddr": "10.0.0.2", 00:11:26.304 "trsvcid": "4420" 00:11:26.304 }, 00:11:26.304 "peer_address": { 00:11:26.304 "trtype": "TCP", 00:11:26.304 "adrfam": "IPv4", 00:11:26.304 "traddr": "10.0.0.1", 00:11:26.304 "trsvcid": "55674" 00:11:26.304 }, 00:11:26.304 "auth": { 00:11:26.304 "state": "completed", 00:11:26.304 "digest": "sha384", 00:11:26.304 "dhgroup": "ffdhe3072" 00:11:26.304 } 00:11:26.304 } 00:11:26.304 ]' 00:11:26.304 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.562 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.821 19:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret 
DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.758 19:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:28.324 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.324 19:49:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.324 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.325 19:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.325 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.325 { 00:11:28.325 "cntlid": 71, 00:11:28.325 "qid": 0, 00:11:28.325 "state": "enabled", 00:11:28.325 "thread": "nvmf_tgt_poll_group_000", 00:11:28.325 "listen_address": { 00:11:28.325 "trtype": "TCP", 00:11:28.325 "adrfam": "IPv4", 00:11:28.325 "traddr": "10.0.0.2", 00:11:28.325 "trsvcid": "4420" 00:11:28.325 }, 00:11:28.325 "peer_address": { 00:11:28.325 "trtype": "TCP", 00:11:28.325 "adrfam": "IPv4", 00:11:28.325 "traddr": "10.0.0.1", 00:11:28.325 "trsvcid": "55700" 00:11:28.325 }, 00:11:28.325 "auth": { 00:11:28.325 "state": "completed", 00:11:28.325 "digest": "sha384", 00:11:28.325 "dhgroup": "ffdhe3072" 00:11:28.325 } 00:11:28.325 } 00:11:28.325 ]' 00:11:28.325 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.583 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.840 19:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.406 19:49:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.664 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.665 19:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.231 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.231 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.231 { 00:11:30.231 "cntlid": 73, 00:11:30.231 "qid": 0, 00:11:30.231 "state": "enabled", 00:11:30.231 "thread": "nvmf_tgt_poll_group_000", 00:11:30.231 "listen_address": { 00:11:30.231 "trtype": "TCP", 00:11:30.231 "adrfam": "IPv4", 00:11:30.231 "traddr": "10.0.0.2", 00:11:30.231 "trsvcid": "4420" 00:11:30.231 }, 00:11:30.231 "peer_address": { 00:11:30.231 "trtype": "TCP", 00:11:30.231 "adrfam": "IPv4", 00:11:30.231 "traddr": "10.0.0.1", 00:11:30.231 "trsvcid": "55720" 00:11:30.231 }, 00:11:30.231 "auth": { 00:11:30.231 "state": "completed", 00:11:30.231 "digest": "sha384", 
00:11:30.231 "dhgroup": "ffdhe4096" 00:11:30.231 } 00:11:30.231 } 00:11:30.232 ]' 00:11:30.232 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.566 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.823 19:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:31.390 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.648 19:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.215 00:11:32.215 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.215 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.215 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.474 { 00:11:32.474 "cntlid": 75, 00:11:32.474 "qid": 0, 00:11:32.474 "state": "enabled", 00:11:32.474 "thread": "nvmf_tgt_poll_group_000", 00:11:32.474 "listen_address": { 00:11:32.474 "trtype": "TCP", 00:11:32.474 "adrfam": "IPv4", 00:11:32.474 "traddr": "10.0.0.2", 00:11:32.474 "trsvcid": "4420" 00:11:32.474 }, 00:11:32.474 "peer_address": { 00:11:32.474 "trtype": "TCP", 00:11:32.474 "adrfam": "IPv4", 00:11:32.474 "traddr": "10.0.0.1", 00:11:32.474 "trsvcid": "55750" 00:11:32.474 }, 00:11:32.474 "auth": { 00:11:32.474 "state": "completed", 00:11:32.474 "digest": "sha384", 00:11:32.474 "dhgroup": "ffdhe4096" 00:11:32.474 } 00:11:32.474 } 00:11:32.474 ]' 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.474 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.733 19:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:33.298 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.298 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:33.298 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.298 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.555 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.555 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.555 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.555 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.813 19:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.069 00:11:34.069 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.069 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.069 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.326 { 00:11:34.326 "cntlid": 77, 00:11:34.326 "qid": 0, 00:11:34.326 "state": "enabled", 00:11:34.326 "thread": "nvmf_tgt_poll_group_000", 00:11:34.326 "listen_address": { 00:11:34.326 "trtype": "TCP", 00:11:34.326 "adrfam": "IPv4", 00:11:34.326 "traddr": "10.0.0.2", 00:11:34.326 "trsvcid": "4420" 00:11:34.326 }, 00:11:34.326 "peer_address": { 00:11:34.326 "trtype": "TCP", 00:11:34.326 "adrfam": "IPv4", 00:11:34.326 "traddr": "10.0.0.1", 00:11:34.326 "trsvcid": "55764" 00:11:34.326 }, 00:11:34.326 "auth": { 00:11:34.326 "state": "completed", 00:11:34.326 "digest": "sha384", 00:11:34.326 "dhgroup": "ffdhe4096" 00:11:34.326 } 00:11:34.326 } 00:11:34.326 ]' 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.326 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.583 19:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.515 19:49:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.515 19:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.079 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.079 { 00:11:36.079 "cntlid": 79, 00:11:36.079 "qid": 0, 00:11:36.079 "state": "enabled", 00:11:36.079 "thread": "nvmf_tgt_poll_group_000", 00:11:36.079 "listen_address": { 00:11:36.079 "trtype": "TCP", 00:11:36.079 "adrfam": "IPv4", 00:11:36.079 "traddr": "10.0.0.2", 00:11:36.079 "trsvcid": "4420" 00:11:36.079 }, 00:11:36.079 "peer_address": { 00:11:36.079 "trtype": "TCP", 
00:11:36.079 "adrfam": "IPv4", 00:11:36.079 "traddr": "10.0.0.1", 00:11:36.079 "trsvcid": "43946" 00:11:36.079 }, 00:11:36.079 "auth": { 00:11:36.079 "state": "completed", 00:11:36.079 "digest": "sha384", 00:11:36.079 "dhgroup": "ffdhe4096" 00:11:36.079 } 00:11:36.079 } 00:11:36.079 ]' 00:11:36.079 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.336 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.593 19:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.528 19:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.095 00:11:38.095 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.095 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.095 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.353 { 00:11:38.353 "cntlid": 81, 00:11:38.353 "qid": 0, 00:11:38.353 "state": "enabled", 00:11:38.353 "thread": "nvmf_tgt_poll_group_000", 00:11:38.353 "listen_address": { 00:11:38.353 "trtype": "TCP", 00:11:38.353 "adrfam": "IPv4", 00:11:38.353 "traddr": "10.0.0.2", 00:11:38.353 "trsvcid": "4420" 00:11:38.353 }, 00:11:38.353 "peer_address": { 00:11:38.353 "trtype": "TCP", 00:11:38.353 "adrfam": "IPv4", 00:11:38.353 "traddr": "10.0.0.1", 00:11:38.353 "trsvcid": "43966" 00:11:38.353 }, 00:11:38.353 "auth": { 00:11:38.353 "state": "completed", 00:11:38.353 "digest": "sha384", 00:11:38.353 "dhgroup": "ffdhe6144" 00:11:38.353 } 00:11:38.353 } 00:11:38.353 ]' 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.353 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.613 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.613 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.613 19:49:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.872 19:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.443 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.701 19:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.702 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.702 19:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.267 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.267 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.526 { 00:11:40.526 "cntlid": 83, 00:11:40.526 "qid": 0, 00:11:40.526 "state": "enabled", 00:11:40.526 "thread": "nvmf_tgt_poll_group_000", 00:11:40.526 "listen_address": { 00:11:40.526 "trtype": "TCP", 00:11:40.526 "adrfam": "IPv4", 00:11:40.526 "traddr": "10.0.0.2", 00:11:40.526 "trsvcid": "4420" 00:11:40.526 }, 00:11:40.526 "peer_address": { 00:11:40.526 "trtype": "TCP", 00:11:40.526 "adrfam": "IPv4", 00:11:40.526 "traddr": "10.0.0.1", 00:11:40.526 "trsvcid": "43984" 00:11:40.526 }, 00:11:40.526 "auth": { 00:11:40.526 "state": "completed", 00:11:40.526 "digest": "sha384", 00:11:40.526 "dhgroup": "ffdhe6144" 00:11:40.526 } 00:11:40.526 } 00:11:40.526 ]' 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.526 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.784 19:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:41.352 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.611 19:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.177 00:11:42.177 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.177 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.177 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.435 { 00:11:42.435 "cntlid": 85, 
00:11:42.435 "qid": 0, 00:11:42.435 "state": "enabled", 00:11:42.435 "thread": "nvmf_tgt_poll_group_000", 00:11:42.435 "listen_address": { 00:11:42.435 "trtype": "TCP", 00:11:42.435 "adrfam": "IPv4", 00:11:42.435 "traddr": "10.0.0.2", 00:11:42.435 "trsvcid": "4420" 00:11:42.435 }, 00:11:42.435 "peer_address": { 00:11:42.435 "trtype": "TCP", 00:11:42.435 "adrfam": "IPv4", 00:11:42.435 "traddr": "10.0.0.1", 00:11:42.435 "trsvcid": "44006" 00:11:42.435 }, 00:11:42.435 "auth": { 00:11:42.435 "state": "completed", 00:11:42.435 "digest": "sha384", 00:11:42.435 "dhgroup": "ffdhe6144" 00:11:42.435 } 00:11:42.435 } 00:11:42.435 ]' 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.435 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.436 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.436 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.436 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.707 19:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:43.303 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.565 19:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.131 00:11:44.131 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.131 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.131 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.390 { 00:11:44.390 "cntlid": 87, 00:11:44.390 "qid": 0, 00:11:44.390 "state": "enabled", 00:11:44.390 "thread": "nvmf_tgt_poll_group_000", 00:11:44.390 "listen_address": { 00:11:44.390 "trtype": "TCP", 00:11:44.390 "adrfam": "IPv4", 00:11:44.390 "traddr": "10.0.0.2", 00:11:44.390 "trsvcid": "4420" 00:11:44.390 }, 00:11:44.390 "peer_address": { 00:11:44.390 "trtype": "TCP", 00:11:44.390 "adrfam": "IPv4", 00:11:44.390 "traddr": "10.0.0.1", 00:11:44.390 "trsvcid": "44026" 00:11:44.390 }, 00:11:44.390 "auth": { 00:11:44.390 "state": "completed", 00:11:44.390 "digest": "sha384", 00:11:44.390 "dhgroup": "ffdhe6144" 00:11:44.390 } 00:11:44.390 } 00:11:44.390 ]' 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.390 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.649 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.649 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.649 19:49:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.649 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.649 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.907 19:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.473 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.731 19:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.297 00:11:46.297 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.297 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.297 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.556 { 00:11:46.556 "cntlid": 89, 00:11:46.556 "qid": 0, 00:11:46.556 "state": "enabled", 00:11:46.556 "thread": "nvmf_tgt_poll_group_000", 00:11:46.556 "listen_address": { 00:11:46.556 "trtype": "TCP", 00:11:46.556 "adrfam": "IPv4", 00:11:46.556 "traddr": "10.0.0.2", 00:11:46.556 "trsvcid": "4420" 00:11:46.556 }, 00:11:46.556 "peer_address": { 00:11:46.556 "trtype": "TCP", 00:11:46.556 "adrfam": "IPv4", 00:11:46.556 "traddr": "10.0.0.1", 00:11:46.556 "trsvcid": "49800" 00:11:46.556 }, 00:11:46.556 "auth": { 00:11:46.556 "state": "completed", 00:11:46.556 "digest": "sha384", 00:11:46.556 "dhgroup": "ffdhe8192" 00:11:46.556 } 00:11:46.556 } 00:11:46.556 ]' 00:11:46.556 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.814 19:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.072 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.007 
19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.007 19:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.007 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.941 00:11:48.941 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.941 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.941 19:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.237 { 00:11:49.237 "cntlid": 91, 00:11:49.237 "qid": 0, 00:11:49.237 "state": "enabled", 00:11:49.237 "thread": "nvmf_tgt_poll_group_000", 00:11:49.237 "listen_address": { 00:11:49.237 "trtype": "TCP", 00:11:49.237 "adrfam": "IPv4", 00:11:49.237 "traddr": "10.0.0.2", 00:11:49.237 "trsvcid": "4420" 00:11:49.237 }, 00:11:49.237 "peer_address": { 00:11:49.237 "trtype": "TCP", 00:11:49.237 "adrfam": "IPv4", 00:11:49.237 "traddr": "10.0.0.1", 00:11:49.237 "trsvcid": "49826" 00:11:49.237 }, 00:11:49.237 "auth": { 00:11:49.237 "state": "completed", 00:11:49.237 "digest": "sha384", 00:11:49.237 "dhgroup": "ffdhe8192" 00:11:49.237 } 00:11:49.237 } 00:11:49.237 ]' 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.237 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.506 19:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.071 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.329 19:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.330 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.330 19:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.265 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.265 { 00:11:51.265 "cntlid": 93, 00:11:51.265 "qid": 0, 00:11:51.265 "state": "enabled", 00:11:51.265 "thread": "nvmf_tgt_poll_group_000", 00:11:51.265 "listen_address": { 00:11:51.265 "trtype": "TCP", 00:11:51.265 "adrfam": "IPv4", 00:11:51.265 "traddr": "10.0.0.2", 00:11:51.265 "trsvcid": "4420" 00:11:51.265 }, 00:11:51.265 "peer_address": { 00:11:51.265 "trtype": "TCP", 00:11:51.265 "adrfam": "IPv4", 00:11:51.265 "traddr": "10.0.0.1", 00:11:51.265 "trsvcid": "49850" 00:11:51.265 }, 00:11:51.265 "auth": { 00:11:51.265 "state": "completed", 00:11:51.265 "digest": "sha384", 00:11:51.265 "dhgroup": "ffdhe8192" 00:11:51.265 } 00:11:51.265 } 00:11:51.265 ]' 00:11:51.265 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.523 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.782 19:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:11:52.347 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.347 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:52.347 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.347 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.606 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.606 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.606 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:52.606 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.864 19:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.431 00:11:53.431 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.431 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.431 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.689 { 00:11:53.689 "cntlid": 95, 00:11:53.689 "qid": 0, 00:11:53.689 "state": "enabled", 00:11:53.689 "thread": "nvmf_tgt_poll_group_000", 00:11:53.689 "listen_address": { 00:11:53.689 "trtype": "TCP", 00:11:53.689 "adrfam": "IPv4", 00:11:53.689 "traddr": "10.0.0.2", 00:11:53.689 "trsvcid": "4420" 00:11:53.689 }, 00:11:53.689 "peer_address": { 00:11:53.689 "trtype": "TCP", 00:11:53.689 "adrfam": "IPv4", 00:11:53.689 "traddr": "10.0.0.1", 00:11:53.689 "trsvcid": "49884" 00:11:53.689 }, 00:11:53.689 "auth": { 00:11:53.689 "state": "completed", 00:11:53.689 "digest": "sha384", 00:11:53.689 "dhgroup": "ffdhe8192" 00:11:53.689 } 00:11:53.689 } 00:11:53.689 ]' 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.689 19:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.948 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.883 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.883 19:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.883 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.141 00:11:55.141 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.141 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.141 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.400 { 00:11:55.400 "cntlid": 97, 00:11:55.400 "qid": 0, 00:11:55.400 "state": "enabled", 00:11:55.400 "thread": "nvmf_tgt_poll_group_000", 00:11:55.400 "listen_address": { 00:11:55.400 "trtype": "TCP", 00:11:55.400 "adrfam": "IPv4", 00:11:55.400 "traddr": "10.0.0.2", 00:11:55.400 "trsvcid": "4420" 00:11:55.400 }, 00:11:55.400 "peer_address": { 00:11:55.400 "trtype": "TCP", 00:11:55.400 "adrfam": "IPv4", 00:11:55.400 "traddr": "10.0.0.1", 00:11:55.400 "trsvcid": "49920" 00:11:55.400 }, 00:11:55.400 "auth": { 00:11:55.400 "state": "completed", 00:11:55.400 "digest": "sha512", 00:11:55.400 "dhgroup": "null" 00:11:55.400 } 00:11:55.400 } 00:11:55.400 ]' 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.400 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.658 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.658 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.658 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.658 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.658 19:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.916 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.483 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.742 19:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.000 00:11:57.000 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.000 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.000 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.259 { 00:11:57.259 "cntlid": 99, 00:11:57.259 "qid": 0, 00:11:57.259 "state": "enabled", 00:11:57.259 "thread": "nvmf_tgt_poll_group_000", 00:11:57.259 "listen_address": { 00:11:57.259 "trtype": "TCP", 00:11:57.259 "adrfam": "IPv4", 00:11:57.259 "traddr": "10.0.0.2", 00:11:57.259 "trsvcid": "4420" 00:11:57.259 }, 00:11:57.259 "peer_address": { 00:11:57.259 "trtype": "TCP", 00:11:57.259 "adrfam": "IPv4", 00:11:57.259 "traddr": "10.0.0.1", 00:11:57.259 "trsvcid": "47674" 00:11:57.259 }, 00:11:57.259 "auth": { 00:11:57.259 "state": "completed", 00:11:57.259 "digest": "sha512", 00:11:57.259 "dhgroup": "null" 00:11:57.259 } 
00:11:57.259 } 00:11:57.259 ]' 00:11:57.259 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.517 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.774 19:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.342 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.600 19:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.858 00:11:59.117 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.117 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.117 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.376 { 00:11:59.376 "cntlid": 101, 00:11:59.376 "qid": 0, 00:11:59.376 "state": "enabled", 00:11:59.376 "thread": "nvmf_tgt_poll_group_000", 00:11:59.376 "listen_address": { 00:11:59.376 "trtype": "TCP", 00:11:59.376 "adrfam": "IPv4", 00:11:59.376 "traddr": "10.0.0.2", 00:11:59.376 "trsvcid": "4420" 00:11:59.376 }, 00:11:59.376 "peer_address": { 00:11:59.376 "trtype": "TCP", 00:11:59.376 "adrfam": "IPv4", 00:11:59.376 "traddr": "10.0.0.1", 00:11:59.376 "trsvcid": "47704" 00:11:59.376 }, 00:11:59.376 "auth": { 00:11:59.376 "state": "completed", 00:11:59.376 "digest": "sha512", 00:11:59.376 "dhgroup": "null" 00:11:59.376 } 00:11:59.376 } 00:11:59.376 ]' 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.376 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.634 19:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid 
f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.570 19:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.137 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.137 { 00:12:01.137 "cntlid": 103, 00:12:01.137 "qid": 0, 00:12:01.137 "state": "enabled", 00:12:01.137 "thread": "nvmf_tgt_poll_group_000", 00:12:01.137 "listen_address": { 00:12:01.137 "trtype": "TCP", 00:12:01.137 "adrfam": "IPv4", 00:12:01.137 "traddr": "10.0.0.2", 00:12:01.137 "trsvcid": "4420" 00:12:01.137 }, 00:12:01.137 "peer_address": { 00:12:01.137 "trtype": "TCP", 00:12:01.137 "adrfam": "IPv4", 00:12:01.137 "traddr": "10.0.0.1", 00:12:01.137 "trsvcid": "47736" 00:12:01.137 }, 00:12:01.137 "auth": { 00:12:01.137 "state": "completed", 00:12:01.137 "digest": "sha512", 00:12:01.137 "dhgroup": "null" 00:12:01.137 } 00:12:01.137 } 00:12:01.137 ]' 00:12:01.137 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.396 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.655 19:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:02.221 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.222 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.222 19:49:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.480 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.738 00:12:02.995 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.995 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.996 19:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.254 { 00:12:03.254 "cntlid": 105, 00:12:03.254 "qid": 0, 00:12:03.254 "state": "enabled", 00:12:03.254 "thread": "nvmf_tgt_poll_group_000", 00:12:03.254 "listen_address": { 00:12:03.254 "trtype": "TCP", 00:12:03.254 "adrfam": "IPv4", 00:12:03.254 "traddr": "10.0.0.2", 00:12:03.254 "trsvcid": "4420" 00:12:03.254 }, 00:12:03.254 "peer_address": { 00:12:03.254 "trtype": "TCP", 00:12:03.254 "adrfam": "IPv4", 00:12:03.254 "traddr": "10.0.0.1", 00:12:03.254 "trsvcid": "47764" 00:12:03.254 }, 00:12:03.254 "auth": { 00:12:03.254 "state": "completed", 
00:12:03.254 "digest": "sha512", 00:12:03.254 "dhgroup": "ffdhe2048" 00:12:03.254 } 00:12:03.254 } 00:12:03.254 ]' 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.254 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.512 19:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.135 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.394 19:49:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.394 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.961 00:12:04.961 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.961 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.961 19:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.961 { 00:12:04.961 "cntlid": 107, 00:12:04.961 "qid": 0, 00:12:04.961 "state": "enabled", 00:12:04.961 "thread": "nvmf_tgt_poll_group_000", 00:12:04.961 "listen_address": { 00:12:04.961 "trtype": "TCP", 00:12:04.961 "adrfam": "IPv4", 00:12:04.961 "traddr": "10.0.0.2", 00:12:04.961 "trsvcid": "4420" 00:12:04.961 }, 00:12:04.961 "peer_address": { 00:12:04.961 "trtype": "TCP", 00:12:04.961 "adrfam": "IPv4", 00:12:04.961 "traddr": "10.0.0.1", 00:12:04.961 "trsvcid": "47786" 00:12:04.961 }, 00:12:04.961 "auth": { 00:12:04.961 "state": "completed", 00:12:04.961 "digest": "sha512", 00:12:04.961 "dhgroup": "ffdhe2048" 00:12:04.961 } 00:12:04.961 } 00:12:04.961 ]' 00:12:04.961 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.219 19:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.478 19:49:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:12:06.044 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.304 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.562 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.820 00:12:06.820 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.820 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.820 19:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.077 { 00:12:07.077 "cntlid": 109, 00:12:07.077 "qid": 0, 00:12:07.077 "state": "enabled", 00:12:07.077 "thread": "nvmf_tgt_poll_group_000", 00:12:07.077 "listen_address": { 00:12:07.077 "trtype": "TCP", 00:12:07.077 "adrfam": "IPv4", 00:12:07.077 "traddr": "10.0.0.2", 00:12:07.077 "trsvcid": "4420" 00:12:07.077 }, 00:12:07.077 "peer_address": { 00:12:07.077 "trtype": "TCP", 00:12:07.077 "adrfam": "IPv4", 00:12:07.077 "traddr": "10.0.0.1", 00:12:07.077 "trsvcid": "44384" 00:12:07.077 }, 00:12:07.077 "auth": { 00:12:07.077 "state": "completed", 00:12:07.077 "digest": "sha512", 00:12:07.077 "dhgroup": "ffdhe2048" 00:12:07.077 } 00:12:07.077 } 00:12:07.077 ]' 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.077 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.337 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.337 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.337 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.596 19:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.162 19:50:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.162 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.420 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.421 19:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.421 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.421 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.679 00:12:08.679 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.679 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.679 19:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.937 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.937 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.937 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.937 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.937 19:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.196 { 00:12:09.196 "cntlid": 111, 00:12:09.196 "qid": 0, 00:12:09.196 "state": "enabled", 00:12:09.196 "thread": "nvmf_tgt_poll_group_000", 00:12:09.196 "listen_address": { 00:12:09.196 "trtype": "TCP", 00:12:09.196 "adrfam": "IPv4", 00:12:09.196 "traddr": "10.0.0.2", 00:12:09.196 "trsvcid": "4420" 00:12:09.196 }, 00:12:09.196 "peer_address": { 00:12:09.196 "trtype": "TCP", 
00:12:09.196 "adrfam": "IPv4", 00:12:09.196 "traddr": "10.0.0.1", 00:12:09.196 "trsvcid": "44420" 00:12:09.196 }, 00:12:09.196 "auth": { 00:12:09.196 "state": "completed", 00:12:09.196 "digest": "sha512", 00:12:09.196 "dhgroup": "ffdhe2048" 00:12:09.196 } 00:12:09.196 } 00:12:09.196 ]' 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.196 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.455 19:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.021 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.280 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.579 00:12:10.579 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.579 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.579 19:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.838 { 00:12:10.838 "cntlid": 113, 00:12:10.838 "qid": 0, 00:12:10.838 "state": "enabled", 00:12:10.838 "thread": "nvmf_tgt_poll_group_000", 00:12:10.838 "listen_address": { 00:12:10.838 "trtype": "TCP", 00:12:10.838 "adrfam": "IPv4", 00:12:10.838 "traddr": "10.0.0.2", 00:12:10.838 "trsvcid": "4420" 00:12:10.838 }, 00:12:10.838 "peer_address": { 00:12:10.838 "trtype": "TCP", 00:12:10.838 "adrfam": "IPv4", 00:12:10.838 "traddr": "10.0.0.1", 00:12:10.838 "trsvcid": "44454" 00:12:10.838 }, 00:12:10.838 "auth": { 00:12:10.838 "state": "completed", 00:12:10.838 "digest": "sha512", 00:12:10.838 "dhgroup": "ffdhe3072" 00:12:10.838 } 00:12:10.838 } 00:12:10.838 ]' 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.838 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.096 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.096 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.096 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.096 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.096 19:50:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.354 19:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.923 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.182 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.750 00:12:12.750 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.750 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.750 19:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.009 { 00:12:13.009 "cntlid": 115, 00:12:13.009 "qid": 0, 00:12:13.009 "state": "enabled", 00:12:13.009 "thread": "nvmf_tgt_poll_group_000", 00:12:13.009 "listen_address": { 00:12:13.009 "trtype": "TCP", 00:12:13.009 "adrfam": "IPv4", 00:12:13.009 "traddr": "10.0.0.2", 00:12:13.009 "trsvcid": "4420" 00:12:13.009 }, 00:12:13.009 "peer_address": { 00:12:13.009 "trtype": "TCP", 00:12:13.009 "adrfam": "IPv4", 00:12:13.009 "traddr": "10.0.0.1", 00:12:13.009 "trsvcid": "44480" 00:12:13.009 }, 00:12:13.009 "auth": { 00:12:13.009 "state": "completed", 00:12:13.009 "digest": "sha512", 00:12:13.009 "dhgroup": "ffdhe3072" 00:12:13.009 } 00:12:13.009 } 00:12:13.009 ]' 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:13.009 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.268 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.268 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.268 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.527 19:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.096 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.355 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.613 00:12:14.924 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.924 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.924 19:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.924 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.183 { 00:12:15.183 "cntlid": 117, 
00:12:15.183 "qid": 0, 00:12:15.183 "state": "enabled", 00:12:15.183 "thread": "nvmf_tgt_poll_group_000", 00:12:15.183 "listen_address": { 00:12:15.183 "trtype": "TCP", 00:12:15.183 "adrfam": "IPv4", 00:12:15.183 "traddr": "10.0.0.2", 00:12:15.183 "trsvcid": "4420" 00:12:15.183 }, 00:12:15.183 "peer_address": { 00:12:15.183 "trtype": "TCP", 00:12:15.183 "adrfam": "IPv4", 00:12:15.183 "traddr": "10.0.0.1", 00:12:15.183 "trsvcid": "44514" 00:12:15.183 }, 00:12:15.183 "auth": { 00:12:15.183 "state": "completed", 00:12:15.183 "digest": "sha512", 00:12:15.183 "dhgroup": "ffdhe3072" 00:12:15.183 } 00:12:15.183 } 00:12:15.183 ]' 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.183 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.441 19:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.372 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.938 00:12:16.938 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.938 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.938 19:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.938 { 00:12:16.938 "cntlid": 119, 00:12:16.938 "qid": 0, 00:12:16.938 "state": "enabled", 00:12:16.938 "thread": "nvmf_tgt_poll_group_000", 00:12:16.938 "listen_address": { 00:12:16.938 "trtype": "TCP", 00:12:16.938 "adrfam": "IPv4", 00:12:16.938 "traddr": "10.0.0.2", 00:12:16.938 "trsvcid": "4420" 00:12:16.938 }, 00:12:16.938 "peer_address": { 00:12:16.938 "trtype": "TCP", 00:12:16.938 "adrfam": "IPv4", 00:12:16.938 "traddr": "10.0.0.1", 00:12:16.938 "trsvcid": "38316" 00:12:16.938 }, 00:12:16.938 "auth": { 00:12:16.938 "state": "completed", 00:12:16.938 "digest": "sha512", 00:12:16.938 "dhgroup": "ffdhe3072" 00:12:16.938 } 00:12:16.938 } 00:12:16.938 ]' 00:12:16.938 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.196 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.454 19:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.018 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.019 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.019 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:18.019 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.276 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.842 00:12:18.842 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.842 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.842 19:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.102 { 00:12:19.102 "cntlid": 121, 00:12:19.102 "qid": 0, 00:12:19.102 "state": "enabled", 00:12:19.102 "thread": "nvmf_tgt_poll_group_000", 00:12:19.102 "listen_address": { 00:12:19.102 "trtype": "TCP", 00:12:19.102 "adrfam": "IPv4", 00:12:19.102 "traddr": "10.0.0.2", 00:12:19.102 "trsvcid": "4420" 00:12:19.102 }, 00:12:19.102 "peer_address": { 00:12:19.102 "trtype": "TCP", 00:12:19.102 "adrfam": "IPv4", 00:12:19.102 "traddr": "10.0.0.1", 00:12:19.102 "trsvcid": "38340" 00:12:19.102 }, 00:12:19.102 "auth": { 00:12:19.102 "state": "completed", 00:12:19.102 "digest": "sha512", 00:12:19.102 "dhgroup": "ffdhe4096" 00:12:19.102 } 00:12:19.102 } 00:12:19.102 ]' 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.102 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.360 19:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.927 
19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.927 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.186 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.187 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.802 00:12:20.802 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.802 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.802 19:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.082 { 00:12:21.082 "cntlid": 123, 00:12:21.082 "qid": 0, 00:12:21.082 "state": "enabled", 00:12:21.082 "thread": "nvmf_tgt_poll_group_000", 00:12:21.082 "listen_address": { 00:12:21.082 "trtype": "TCP", 00:12:21.082 "adrfam": "IPv4", 00:12:21.082 "traddr": "10.0.0.2", 00:12:21.082 "trsvcid": "4420" 00:12:21.082 }, 00:12:21.082 "peer_address": { 00:12:21.082 "trtype": "TCP", 00:12:21.082 "adrfam": "IPv4", 00:12:21.082 "traddr": "10.0.0.1", 00:12:21.082 "trsvcid": "38364" 00:12:21.082 }, 00:12:21.082 "auth": { 00:12:21.082 "state": "completed", 00:12:21.082 "digest": "sha512", 00:12:21.082 "dhgroup": "ffdhe4096" 00:12:21.082 } 00:12:21.082 } 00:12:21.082 ]' 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.082 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.341 19:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.279 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.847 00:12:22.847 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.847 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.847 19:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.107 { 00:12:23.107 "cntlid": 125, 00:12:23.107 "qid": 0, 00:12:23.107 "state": "enabled", 00:12:23.107 "thread": "nvmf_tgt_poll_group_000", 00:12:23.107 "listen_address": { 00:12:23.107 "trtype": "TCP", 00:12:23.107 "adrfam": "IPv4", 00:12:23.107 "traddr": "10.0.0.2", 00:12:23.107 "trsvcid": "4420" 00:12:23.107 }, 00:12:23.107 "peer_address": { 00:12:23.107 "trtype": "TCP", 00:12:23.107 "adrfam": "IPv4", 00:12:23.107 "traddr": "10.0.0.1", 00:12:23.107 "trsvcid": "38398" 00:12:23.107 }, 00:12:23.107 "auth": { 00:12:23.107 "state": "completed", 00:12:23.107 "digest": "sha512", 00:12:23.107 "dhgroup": "ffdhe4096" 00:12:23.107 } 00:12:23.107 } 00:12:23.107 ]' 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.107 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.366 19:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:23.934 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:24.502 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.503 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.503 19:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.503 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.503 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.761 00:12:24.761 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.761 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.761 19:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.020 { 00:12:25.020 "cntlid": 127, 00:12:25.020 "qid": 0, 00:12:25.020 "state": "enabled", 00:12:25.020 "thread": "nvmf_tgt_poll_group_000", 00:12:25.020 "listen_address": { 00:12:25.020 "trtype": "TCP", 00:12:25.020 "adrfam": "IPv4", 00:12:25.020 "traddr": "10.0.0.2", 00:12:25.020 "trsvcid": "4420" 00:12:25.020 }, 00:12:25.020 "peer_address": { 00:12:25.020 "trtype": "TCP", 00:12:25.020 "adrfam": "IPv4", 00:12:25.020 "traddr": "10.0.0.1", 00:12:25.020 "trsvcid": "38424" 00:12:25.020 }, 00:12:25.020 "auth": { 00:12:25.020 "state": "completed", 00:12:25.020 "digest": "sha512", 00:12:25.020 "dhgroup": "ffdhe4096" 00:12:25.020 } 00:12:25.020 } 00:12:25.020 ]' 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:25.020 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.280 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.280 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.280 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.280 19:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.216 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.216 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.475 19:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.043 00:12:27.043 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.043 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.043 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
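
Each connect_authenticate pass in the trace above reduces to a handful of RPCs. Condensed to its positive path and written with the test's own wrappers (rpc_cmd drives the target application; hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock, as the target/auth.sh@31 lines show), it looks roughly like the sketch below; the NQNs and address are the ones used in this run, and key0/ckey0 stand for whichever key pair the current iteration selects.

# Host side: restrict the initiator to the digest/DH group under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN on the subsystem with a DH-HMAC-CHAP key
# (plus a controller key when bidirectional authentication is exercised).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller with the matching keys; a successful attach
# means the DH-HMAC-CHAP handshake completed.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The nvmf_subsystem_get_qpairs dump that follows is then inspected for the negotiated digest, DH group and auth state.
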
00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.302 { 00:12:27.302 "cntlid": 129, 00:12:27.302 "qid": 0, 00:12:27.302 "state": "enabled", 00:12:27.302 "thread": "nvmf_tgt_poll_group_000", 00:12:27.302 "listen_address": { 00:12:27.302 "trtype": "TCP", 00:12:27.302 "adrfam": "IPv4", 00:12:27.302 "traddr": "10.0.0.2", 00:12:27.302 "trsvcid": "4420" 00:12:27.302 }, 00:12:27.302 "peer_address": { 00:12:27.302 "trtype": "TCP", 00:12:27.302 "adrfam": "IPv4", 00:12:27.302 "traddr": "10.0.0.1", 00:12:27.302 "trsvcid": "41740" 00:12:27.302 }, 00:12:27.302 "auth": { 00:12:27.302 "state": "completed", 00:12:27.302 "digest": "sha512", 00:12:27.302 "dhgroup": "ffdhe6144" 00:12:27.302 } 00:12:27.302 } 00:12:27.302 ]' 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.302 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.561 19:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.499 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.758 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.759 19:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.018 00:12:29.018 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.018 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.018 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.276 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.276 { 00:12:29.276 "cntlid": 131, 00:12:29.276 "qid": 0, 00:12:29.276 "state": "enabled", 00:12:29.276 "thread": "nvmf_tgt_poll_group_000", 00:12:29.276 "listen_address": { 00:12:29.276 "trtype": "TCP", 00:12:29.276 "adrfam": "IPv4", 00:12:29.276 "traddr": "10.0.0.2", 00:12:29.276 "trsvcid": "4420" 00:12:29.276 }, 00:12:29.276 "peer_address": { 00:12:29.276 "trtype": "TCP", 00:12:29.276 "adrfam": "IPv4", 00:12:29.276 "traddr": "10.0.0.1", 00:12:29.276 "trsvcid": "41782" 00:12:29.276 }, 00:12:29.276 "auth": { 00:12:29.276 "state": "completed", 00:12:29.276 "digest": "sha512", 00:12:29.276 "dhgroup": "ffdhe6144" 00:12:29.276 } 00:12:29.276 } 00:12:29.276 ]' 00:12:29.276 
19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.535 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.794 19:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.361 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
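
Between the SPDK-host passes, each key pair is also exercised through the kernel initiator, with the secrets handed to nvme-cli in the DHHC-1 text form that appears verbatim in the trace. A minimal sketch of that leg, with the host NQN, host ID and secrets replaced by placeholder variables (the literal values are in the nvme connect lines above):

# Kernel initiator: authenticate with the host secret (and the controller
# secret when bidirectional auth is configured), then tear the session down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" lines in the trace confirm that each of these connects actually established a controller before it was torn down.
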
00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.929 19:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.188 00:12:31.188 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.188 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.188 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.448 { 00:12:31.448 "cntlid": 133, 00:12:31.448 "qid": 0, 00:12:31.448 "state": "enabled", 00:12:31.448 "thread": "nvmf_tgt_poll_group_000", 00:12:31.448 "listen_address": { 00:12:31.448 "trtype": "TCP", 00:12:31.448 "adrfam": "IPv4", 00:12:31.448 "traddr": "10.0.0.2", 00:12:31.448 "trsvcid": "4420" 00:12:31.448 }, 00:12:31.448 "peer_address": { 00:12:31.448 "trtype": "TCP", 00:12:31.448 "adrfam": "IPv4", 00:12:31.448 "traddr": "10.0.0.1", 00:12:31.448 "trsvcid": "41822" 00:12:31.448 }, 00:12:31.448 "auth": { 00:12:31.448 "state": "completed", 00:12:31.448 "digest": "sha512", 00:12:31.448 "dhgroup": "ffdhe6144" 00:12:31.448 } 00:12:31.448 } 00:12:31.448 ]' 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.448 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.708 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.708 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.708 19:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.968 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid 
f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:32.534 19:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:32.792 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.050 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.050 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.050 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.050 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.310 00:12:33.568 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.568 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.568 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.827 { 00:12:33.827 "cntlid": 135, 00:12:33.827 "qid": 0, 00:12:33.827 "state": "enabled", 00:12:33.827 "thread": "nvmf_tgt_poll_group_000", 00:12:33.827 "listen_address": { 00:12:33.827 "trtype": "TCP", 00:12:33.827 "adrfam": "IPv4", 00:12:33.827 "traddr": "10.0.0.2", 00:12:33.827 "trsvcid": "4420" 00:12:33.827 }, 00:12:33.827 "peer_address": { 00:12:33.827 "trtype": "TCP", 00:12:33.827 "adrfam": "IPv4", 00:12:33.827 "traddr": "10.0.0.1", 00:12:33.827 "trsvcid": "41848" 00:12:33.827 }, 00:12:33.827 "auth": { 00:12:33.827 "state": "completed", 00:12:33.827 "digest": "sha512", 00:12:33.827 "dhgroup": "ffdhe6144" 00:12:33.827 } 00:12:33.827 } 00:12:33.827 ]' 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:33.827 19:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.827 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.827 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.827 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.086 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:35.021 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.021 19:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:35.021 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.021 19:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.021 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.588 00:12:35.847 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.847 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.847 19:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.105 { 00:12:36.105 "cntlid": 137, 00:12:36.105 "qid": 0, 00:12:36.105 "state": "enabled", 00:12:36.105 "thread": "nvmf_tgt_poll_group_000", 00:12:36.105 "listen_address": { 00:12:36.105 "trtype": "TCP", 00:12:36.105 "adrfam": "IPv4", 00:12:36.105 "traddr": "10.0.0.2", 00:12:36.105 "trsvcid": "4420" 00:12:36.105 }, 00:12:36.105 "peer_address": { 00:12:36.105 "trtype": "TCP", 00:12:36.105 "adrfam": "IPv4", 00:12:36.105 "traddr": "10.0.0.1", 00:12:36.105 "trsvcid": "37696" 00:12:36.105 }, 00:12:36.105 "auth": { 00:12:36.105 
"state": "completed", 00:12:36.105 "digest": "sha512", 00:12:36.105 "dhgroup": "ffdhe8192" 00:12:36.105 } 00:12:36.105 } 00:12:36.105 ]' 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.105 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.363 19:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:36.927 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.185 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.442 19:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.007 00:12:38.007 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.007 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.007 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.347 { 00:12:38.347 "cntlid": 139, 00:12:38.347 "qid": 0, 00:12:38.347 "state": "enabled", 00:12:38.347 "thread": "nvmf_tgt_poll_group_000", 00:12:38.347 "listen_address": { 00:12:38.347 "trtype": "TCP", 00:12:38.347 "adrfam": "IPv4", 00:12:38.347 "traddr": "10.0.0.2", 00:12:38.347 "trsvcid": "4420" 00:12:38.347 }, 00:12:38.347 "peer_address": { 00:12:38.347 "trtype": "TCP", 00:12:38.347 "adrfam": "IPv4", 00:12:38.347 "traddr": "10.0.0.1", 00:12:38.347 "trsvcid": "37710" 00:12:38.347 }, 00:12:38.347 "auth": { 00:12:38.347 "state": "completed", 00:12:38.347 "digest": "sha512", 00:12:38.347 "dhgroup": "ffdhe8192" 00:12:38.347 } 00:12:38.347 } 00:12:38.347 ]' 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.347 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.606 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.606 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.606 19:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.863 19:50:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:01:MjA2MzA3OGJmNGU3YzNmNDM0MzZhMzI3OGRkYTEzZGZ5f+mJ: --dhchap-ctrl-secret DHHC-1:02:MDgxZDM2YjczNDEwNDU5ODUzYTJjYzE3MjgyYTBmNDhkYTllZDgxNmZjMjkyNzlk6aZniw==: 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.427 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.994 19:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.561 00:12:40.561 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.561 19:50:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.561 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.820 { 00:12:40.820 "cntlid": 141, 00:12:40.820 "qid": 0, 00:12:40.820 "state": "enabled", 00:12:40.820 "thread": "nvmf_tgt_poll_group_000", 00:12:40.820 "listen_address": { 00:12:40.820 "trtype": "TCP", 00:12:40.820 "adrfam": "IPv4", 00:12:40.820 "traddr": "10.0.0.2", 00:12:40.820 "trsvcid": "4420" 00:12:40.820 }, 00:12:40.820 "peer_address": { 00:12:40.820 "trtype": "TCP", 00:12:40.820 "adrfam": "IPv4", 00:12:40.820 "traddr": "10.0.0.1", 00:12:40.820 "trsvcid": "37748" 00:12:40.820 }, 00:12:40.820 "auth": { 00:12:40.820 "state": "completed", 00:12:40.820 "digest": "sha512", 00:12:40.820 "dhgroup": "ffdhe8192" 00:12:40.820 } 00:12:40.820 } 00:12:40.820 ]' 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.820 19:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.820 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.820 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.820 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.079 19:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:02:NTljNWZlMzE2YTYzODdmYzQwMjY4YTBmMWVlODZlYzZlNmUxYjkyZDY1ODNkMGJh9U644A==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNmQyN2IxOWIwODRkMzIxMjEzMzg4ZWMzNWUwOGUiTO+a: 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.014 
19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:42.014 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.272 19:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.840 00:12:42.840 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.840 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.840 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.098 { 00:12:43.098 "cntlid": 143, 00:12:43.098 "qid": 0, 00:12:43.098 "state": "enabled", 00:12:43.098 "thread": "nvmf_tgt_poll_group_000", 00:12:43.098 "listen_address": { 00:12:43.098 "trtype": "TCP", 00:12:43.098 "adrfam": "IPv4", 00:12:43.098 "traddr": "10.0.0.2", 00:12:43.098 "trsvcid": "4420" 00:12:43.098 }, 00:12:43.098 "peer_address": { 00:12:43.098 "trtype": 
"TCP", 00:12:43.098 "adrfam": "IPv4", 00:12:43.098 "traddr": "10.0.0.1", 00:12:43.098 "trsvcid": "37782" 00:12:43.098 }, 00:12:43.098 "auth": { 00:12:43.098 "state": "completed", 00:12:43.098 "digest": "sha512", 00:12:43.098 "dhgroup": "ffdhe8192" 00:12:43.098 } 00:12:43.098 } 00:12:43.098 ]' 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.098 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.357 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.357 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.357 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.357 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.357 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.616 19:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.185 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.752 19:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.320 00:12:45.320 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.320 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.320 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.579 { 00:12:45.579 "cntlid": 145, 00:12:45.579 "qid": 0, 00:12:45.579 "state": "enabled", 00:12:45.579 "thread": "nvmf_tgt_poll_group_000", 00:12:45.579 "listen_address": { 00:12:45.579 "trtype": "TCP", 00:12:45.579 "adrfam": "IPv4", 00:12:45.579 "traddr": "10.0.0.2", 00:12:45.579 "trsvcid": "4420" 00:12:45.579 }, 00:12:45.579 "peer_address": { 00:12:45.579 "trtype": "TCP", 00:12:45.579 "adrfam": "IPv4", 00:12:45.579 "traddr": "10.0.0.1", 00:12:45.579 "trsvcid": "37822" 00:12:45.579 }, 00:12:45.579 "auth": { 00:12:45.579 "state": "completed", 00:12:45.579 "digest": "sha512", 00:12:45.579 "dhgroup": "ffdhe8192" 00:12:45.579 } 00:12:45.579 } 00:12:45.579 ]' 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.579 19:50:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.579 19:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.146 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:00:NGUxYmNkNDk4N2JmNjc4ODhlNWFmYWU2MDM5ZDI1M2YwMDFlZGQzNDllNzAxYWRiD0mzYA==: --dhchap-ctrl-secret DHHC-1:03:MDllYjkzNDc1ODc0YzMwNzU0MGNjYzQ5NjNjOTljODdhNDQzMmE2MzEzMjAwNDc3YTJiMDAzNTQ0OWZhNjA2Md4WOuA=: 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:12:46.714 19:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:47.282 request: 00:12:47.282 { 00:12:47.282 "name": "nvme0", 00:12:47.282 "trtype": "tcp", 00:12:47.282 "traddr": "10.0.0.2", 00:12:47.282 "adrfam": "ipv4", 00:12:47.282 "trsvcid": "4420", 00:12:47.282 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:47.282 "prchk_reftag": false, 00:12:47.282 "prchk_guard": false, 00:12:47.282 "hdgst": false, 00:12:47.282 "ddgst": false, 00:12:47.282 "dhchap_key": "key2", 00:12:47.282 "method": "bdev_nvme_attach_controller", 00:12:47.282 "req_id": 1 00:12:47.282 } 00:12:47.282 Got JSON-RPC error response 00:12:47.282 response: 00:12:47.282 { 00:12:47.282 "code": -5, 00:12:47.282 "message": "Input/output error" 00:12:47.282 } 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.282 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:47.283 19:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:47.851 request: 00:12:47.851 { 00:12:47.851 "name": "nvme0", 00:12:47.851 "trtype": "tcp", 00:12:47.851 "traddr": "10.0.0.2", 00:12:47.851 "adrfam": "ipv4", 00:12:47.851 "trsvcid": "4420", 00:12:47.851 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:47.851 "prchk_reftag": false, 00:12:47.851 "prchk_guard": false, 00:12:47.851 "hdgst": false, 00:12:47.851 "ddgst": false, 00:12:47.851 "dhchap_key": "key1", 00:12:47.851 "dhchap_ctrlr_key": "ckey2", 00:12:47.851 "method": "bdev_nvme_attach_controller", 00:12:47.851 "req_id": 1 00:12:47.851 } 00:12:47.851 Got JSON-RPC error response 00:12:47.851 response: 00:12:47.851 { 00:12:47.851 "code": -5, 00:12:47.851 "message": "Input/output error" 00:12:47.851 } 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key1 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.851 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.439 request: 00:12:48.439 { 00:12:48.439 "name": "nvme0", 00:12:48.439 "trtype": "tcp", 00:12:48.439 "traddr": "10.0.0.2", 00:12:48.439 "adrfam": "ipv4", 00:12:48.439 "trsvcid": "4420", 00:12:48.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:48.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:48.439 "prchk_reftag": false, 00:12:48.439 "prchk_guard": false, 00:12:48.439 "hdgst": false, 00:12:48.439 "ddgst": false, 00:12:48.439 "dhchap_key": "key1", 00:12:48.439 "dhchap_ctrlr_key": "ckey1", 00:12:48.439 "method": "bdev_nvme_attach_controller", 00:12:48.439 "req_id": 1 00:12:48.439 } 00:12:48.439 Got JSON-RPC error response 00:12:48.439 response: 00:12:48.439 { 00:12:48.439 "code": -5, 00:12:48.439 "message": "Input/output error" 00:12:48.439 } 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.439 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69396 00:12:48.440 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69396 ']' 00:12:48.440 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69396 00:12:48.440 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:48.440 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.440 19:50:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69396 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69396' 00:12:48.698 killing process with pid 69396 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69396 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69396 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.698 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.699 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72424 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72424 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72424 ']' 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.957 19:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.889 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.889 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:49.889 19:50:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.889 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.889 19:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72424 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72424 ']' 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
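The trace above restarts the target as pid 72424 with --wait-for-rpc -L nvmf_auth and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that polling loop follows, assuming the same rpc.py and /var/tmp/spdk.sock paths shown in the trace; the real helper in autotest_common.sh does more than this sketch.

# Sketch only: poll the app's RPC socket until it responds, roughly what waitforlisten does.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
pid=$1   # pid of the nvmf_tgt started above, e.g. 72424

for ((i = 0; i < 100; i++)); do
    # Stop early if the target already died.
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; exit 1; }
    if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
        echo "process $pid is listening on $sock"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $sock" >&2
exit 1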
00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.889 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.148 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.148 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:50.148 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:50.148 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.148 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.408 19:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.976 00:12:50.976 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.976 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.976 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.236 { 00:12:51.236 "cntlid": 1, 00:12:51.236 "qid": 0, 00:12:51.236 "state": "enabled", 00:12:51.236 "thread": "nvmf_tgt_poll_group_000", 00:12:51.236 "listen_address": { 00:12:51.236 "trtype": "TCP", 00:12:51.236 "adrfam": "IPv4", 00:12:51.236 "traddr": "10.0.0.2", 00:12:51.236 "trsvcid": "4420" 00:12:51.236 }, 00:12:51.236 "peer_address": { 00:12:51.236 "trtype": "TCP", 00:12:51.236 "adrfam": "IPv4", 00:12:51.236 "traddr": "10.0.0.1", 00:12:51.236 "trsvcid": "52776" 00:12:51.236 }, 00:12:51.236 "auth": { 00:12:51.236 "state": "completed", 00:12:51.236 "digest": "sha512", 00:12:51.236 "dhgroup": "ffdhe8192" 00:12:51.236 } 00:12:51.236 } 00:12:51.236 ]' 00:12:51.236 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.495 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.755 19:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-secret DHHC-1:03:YTA4ZTI4NTFlMGRlYjI4Mzc4NDNlOGYwYTRiYzMxYjViNzk0NDA1ZmQ2OGIyZmQ3MDM1YzJiMWM0ZDkxM2U3ZPAVHUY=: 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --dhchap-key key3 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.691 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:52.692 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.692 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:52.692 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.692 19:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.692 19:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.950 request: 00:12:52.950 { 00:12:52.950 "name": "nvme0", 00:12:52.950 "trtype": "tcp", 00:12:52.950 "traddr": "10.0.0.2", 00:12:52.950 "adrfam": "ipv4", 00:12:52.950 "trsvcid": "4420", 00:12:52.950 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:52.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:52.950 "prchk_reftag": false, 00:12:52.950 "prchk_guard": false, 00:12:52.950 "hdgst": false, 00:12:52.950 "ddgst": false, 00:12:52.950 "dhchap_key": "key3", 00:12:52.950 "method": "bdev_nvme_attach_controller", 00:12:52.950 "req_id": 1 00:12:52.950 } 00:12:52.950 Got JSON-RPC error response 00:12:52.950 response: 00:12:52.950 { 00:12:52.950 "code": -5, 00:12:52.950 "message": "Input/output error" 00:12:52.950 } 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:53.209 
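At this point the host has been narrowed to --dhchap-digests sha256 and the attach with key3 is expected to fail with the JSON-RPC -5 (Input/output error) shown above, since the host no longer offers the sha512/ffdhe8192 combination negotiated earlier; the script then re-widens the allowed digests and dhgroups before the next attempt. A hedged sketch of that negative-test shape, reusing the addresses, NQNs and host socket from the trace as placeholder values:

# Sketch of the digest-mismatch negative test; values copied from the trace,
# but treat them as placeholders for your own setup.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host side to a digest the target will not agree on.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256

# The attach must fail; success here is a test error.
if "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi

# Re-widen the host options before the next test case, as the trace does.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192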
19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.209 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.468 request: 00:12:53.468 { 00:12:53.468 "name": "nvme0", 00:12:53.468 "trtype": "tcp", 00:12:53.468 "traddr": "10.0.0.2", 00:12:53.468 "adrfam": "ipv4", 00:12:53.468 "trsvcid": "4420", 00:12:53.468 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:53.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:53.468 "prchk_reftag": false, 00:12:53.468 "prchk_guard": false, 00:12:53.468 "hdgst": false, 00:12:53.468 "ddgst": false, 00:12:53.468 "dhchap_key": "key3", 00:12:53.468 "method": "bdev_nvme_attach_controller", 00:12:53.468 "req_id": 1 00:12:53.468 } 00:12:53.468 Got JSON-RPC error response 00:12:53.468 response: 00:12:53.468 { 00:12:53.468 "code": -5, 00:12:53.468 "message": "Input/output error" 00:12:53.468 } 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:53.468 19:50:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:53.736 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:53.737 19:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:54.022 request: 00:12:54.022 { 00:12:54.022 "name": "nvme0", 00:12:54.022 "trtype": "tcp", 00:12:54.022 "traddr": "10.0.0.2", 00:12:54.022 "adrfam": "ipv4", 00:12:54.022 "trsvcid": "4420", 00:12:54.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:54.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c", 00:12:54.022 "prchk_reftag": false, 00:12:54.022 "prchk_guard": false, 00:12:54.022 "hdgst": false, 00:12:54.022 "ddgst": false, 00:12:54.022 "dhchap_key": "key0", 
00:12:54.022 "dhchap_ctrlr_key": "key1", 00:12:54.022 "method": "bdev_nvme_attach_controller", 00:12:54.022 "req_id": 1 00:12:54.022 } 00:12:54.022 Got JSON-RPC error response 00:12:54.022 response: 00:12:54.022 { 00:12:54.022 "code": -5, 00:12:54.022 "message": "Input/output error" 00:12:54.022 } 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:54.022 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:54.281 00:12:54.281 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:54.281 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.281 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:54.539 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.539 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.539 19:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.797 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:54.797 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:54.797 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69428 00:12:54.797 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69428 ']' 00:12:54.798 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69428 00:12:54.798 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:54.798 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.798 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69428 00:12:55.056 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:55.056 killing process with pid 69428 00:12:55.056 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:55.056 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69428' 00:12:55.056 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69428 00:12:55.056 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69428 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:55.315 
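The cleanup above (target/auth.sh@199) detaches nvme0 and then killprocess stops the first target, pid 69428: it checks that the pid is still alive, looks at its command name, sends kill and waits for the process to exit. A simplified sketch of that pattern, assuming the process was started by the same shell so that wait applies to it; the real helper in autotest_common.sh covers more cases (e.g. the sudo check visible in the trace).

# Simplified killprocess sketch; see the assumptions in the note above.
killprocess_sketch() {
    local pid=$1
    # Nothing to do if the process is already gone.
    kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    # Only meaningful if $pid is a child of this shell, as it is in the trace.
    wait "$pid" 2>/dev/null || true
}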
19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.315 rmmod nvme_tcp 00:12:55.315 rmmod nvme_fabrics 00:12:55.315 rmmod nvme_keyring 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72424 ']' 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72424 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72424 ']' 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72424 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72424 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.315 killing process with pid 72424 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72424' 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72424 00:12:55.315 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72424 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.573 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.834 19:50:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:55.834 19:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.AJJ /tmp/spdk.key-sha256.m1C /tmp/spdk.key-sha384.Tbq /tmp/spdk.key-sha512.ROp /tmp/spdk.key-sha512.Fii /tmp/spdk.key-sha384.h5l /tmp/spdk.key-sha256.J2i '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:55.834 00:12:55.834 real 2m48.747s 00:12:55.834 user 6m43.278s 00:12:55.834 sys 
0m26.397s 00:12:55.834 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.834 19:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.834 ************************************ 00:12:55.834 END TEST nvmf_auth_target 00:12:55.834 ************************************ 00:12:55.834 19:50:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:55.834 19:50:49 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:55.834 19:50:49 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:55.834 19:50:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:55.834 19:50:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.834 19:50:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.834 ************************************ 00:12:55.834 START TEST nvmf_bdevio_no_huge 00:12:55.834 ************************************ 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:55.834 * Looking for test storage... 00:12:55.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.834 19:50:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
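Every attach and connect in this log identifies the initiator by the same UUID-based host NQN; the lines above show where it comes from: nvmf/common.sh calls nvme gen-hostnqn once and derives NVME_HOSTID from the result. A short sketch of that derivation (the ##*: expansion is an assumption about how the prefix is stripped, kept only to illustrate the idea):

# Sketch: derive the host identity used by every attach/connect in this log.
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:f7fc...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID portion of the NQN (assumed expansion)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")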
00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.834 
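With NET_TYPE=virt, nvmftestinit does not touch physical NICs; the ip commands traced below build a private topology instead: a target network namespace, veth pairs for the initiator and target sides, a bridge tying the host-side ends together, an iptables rule for port 4420, and ping checks against 10.0.0.2/10.0.0.3. A condensed sketch of that setup, with names and addresses copied from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

# Condensed veth/netns topology, mirroring the nvmf_veth_init trace below.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends and let NVMe/TCP traffic reach the initiator veth.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability check, as in the trace: the target namespace answers on 10.0.0.2.
ping -c 1 10.0.0.2

The point of this arrangement is that the TCP transport can be exercised end to end on a single VM, with the target listening inside the namespace and the initiator connecting over the bridge, without depending on real hardware.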
19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:55.834 Cannot find device "nvmf_tgt_br" 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.834 Cannot find device "nvmf_tgt_br2" 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:55.834 Cannot find device "nvmf_tgt_br" 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:55.834 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:56.094 Cannot find device "nvmf_tgt_br2" 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:56.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:56.094 00:12:56.094 --- 10.0.0.2 ping statistics --- 00:12:56.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.094 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:56.094 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.094 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:12:56.094 00:12:56.094 --- 10.0.0.3 ping statistics --- 00:12:56.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.094 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:56.094 00:12:56.094 --- 10.0.0.1 ping statistics --- 00:12:56.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.094 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.094 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72751 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72751 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72751 ']' 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.353 19:50:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.353 [2024-07-15 19:50:50.410740] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:56.353 [2024-07-15 19:50:50.410850] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:56.353 [2024-07-15 19:50:50.563010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.612 [2024-07-15 19:50:50.714817] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
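The nvmfappstart -m 0x78 call above launches the target inside the namespace with --no-huge -s 1024, so the app runs from ordinary memory rather than hugepages (the EAL parameter dump confirms --no-huge and -m 1024); it records nvmfpid, waits for the RPC socket and installs a cleanup trap, visible just below. A compressed sketch of that launch, with the command line copied from the trace and a simplified trap standing in for the real process_shm/nvmftestfini handler:

# Compressed sketch of the --no-huge target launch traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Simplified cleanup; the trace installs process_shm + nvmftestfini here instead.
trap 'kill $nvmfpid 2>/dev/null' SIGINT SIGTERM EXIT

# Block until the RPC socket answers before issuing nvmf_create_transport etc.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null; do
    sleep 0.5
done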
00:12:56.612 [2024-07-15 19:50:50.714915] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.612 [2024-07-15 19:50:50.714948] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.612 [2024-07-15 19:50:50.714961] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.612 [2024-07-15 19:50:50.714973] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.612 [2024-07-15 19:50:50.715158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:56.612 [2024-07-15 19:50:50.715292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:56.612 [2024-07-15 19:50:50.715966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:56.612 [2024-07-15 19:50:50.715971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.612 [2024-07-15 19:50:50.721652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:57.181 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.181 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:57.181 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.181 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.181 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.440 [2024-07-15 19:50:51.460699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.440 Malloc0 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.440 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.441 [2024-07-15 19:50:51.504840] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:57.441 { 00:12:57.441 "params": { 00:12:57.441 "name": "Nvme$subsystem", 00:12:57.441 "trtype": "$TEST_TRANSPORT", 00:12:57.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.441 "adrfam": "ipv4", 00:12:57.441 "trsvcid": "$NVMF_PORT", 00:12:57.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.441 "hdgst": ${hdgst:-false}, 00:12:57.441 "ddgst": ${ddgst:-false} 00:12:57.441 }, 00:12:57.441 "method": "bdev_nvme_attach_controller" 00:12:57.441 } 00:12:57.441 EOF 00:12:57.441 )") 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:57.441 19:50:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:57.441 "params": { 00:12:57.441 "name": "Nvme1", 00:12:57.441 "trtype": "tcp", 00:12:57.441 "traddr": "10.0.0.2", 00:12:57.441 "adrfam": "ipv4", 00:12:57.441 "trsvcid": "4420", 00:12:57.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.441 "hdgst": false, 00:12:57.441 "ddgst": false 00:12:57.441 }, 00:12:57.441 "method": "bdev_nvme_attach_controller" 00:12:57.441 }' 00:12:57.441 [2024-07-15 19:50:51.561340] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:12:57.441 [2024-07-15 19:50:51.561430] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72788 ] 00:12:57.700 [2024-07-15 19:50:51.706605] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.700 [2024-07-15 19:50:51.851994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.700 [2024-07-15 19:50:51.852119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.700 [2024-07-15 19:50:51.852129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.700 [2024-07-15 19:50:51.866628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:57.960 I/O targets: 00:12:57.960 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:57.960 00:12:57.960 00:12:57.960 CUnit - A unit testing framework for C - Version 2.1-3 00:12:57.960 http://cunit.sourceforge.net/ 00:12:57.960 00:12:57.960 00:12:57.960 Suite: bdevio tests on: Nvme1n1 00:12:57.960 Test: blockdev write read block ...passed 00:12:57.960 Test: blockdev write zeroes read block ...passed 00:12:57.960 Test: blockdev write zeroes read no split ...passed 00:12:57.960 Test: blockdev write zeroes read split ...passed 00:12:57.960 Test: blockdev write zeroes read split partial ...passed 00:12:57.960 Test: blockdev reset ...[2024-07-15 19:50:52.071459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:57.960 [2024-07-15 19:50:52.071564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0a10 (9): Bad file descriptor 00:12:57.960 [2024-07-15 19:50:52.082608] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:57.960 passed 00:12:57.960 Test: blockdev write read 8 blocks ...passed 00:12:57.960 Test: blockdev write read size > 128k ...passed 00:12:57.960 Test: blockdev write read invalid size ...passed 00:12:57.960 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:57.960 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:57.960 Test: blockdev write read max offset ...passed 00:12:57.960 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:57.960 Test: blockdev writev readv 8 blocks ...passed 00:12:57.960 Test: blockdev writev readv 30 x 1block ...passed 00:12:57.960 Test: blockdev writev readv block ...passed 00:12:57.960 Test: blockdev writev readv size > 128k ...passed 00:12:57.960 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:57.960 Test: blockdev comparev and writev ...[2024-07-15 19:50:52.091184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.091226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.091248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.091259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.091631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.091663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.091683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.091694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.092095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.092126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.092155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.092537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.092568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.092586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.960 [2024-07-15 19:50:52.092597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:57.960 passed 00:12:57.960 Test: blockdev nvme passthru rw ...passed 00:12:57.960 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:50:52.093690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.960 [2024-07-15 19:50:52.093715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.093836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.960 [2024-07-15 19:50:52.093852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.093961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.960 [2024-07-15 19:50:52.093984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:57.960 [2024-07-15 19:50:52.094092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.960 [2024-07-15 19:50:52.094107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:57.960 passed 00:12:57.960 Test: blockdev nvme admin passthru ...passed 00:12:57.960 Test: blockdev copy ...passed 00:12:57.960 00:12:57.960 Run Summary: Type Total Ran Passed Failed Inactive 00:12:57.960 suites 1 1 n/a 0 0 00:12:57.960 tests 23 23 23 0 0 00:12:57.960 asserts 152 152 152 0 n/a 00:12:57.960 00:12:57.960 Elapsed time = 0.156 seconds 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:58.219 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.477 rmmod nvme_tcp 00:12:58.477 rmmod nvme_fabrics 00:12:58.477 rmmod nvme_keyring 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72751 ']' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72751 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72751 ']' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72751 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72751 00:12:58.477 killing process with pid 72751 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72751' 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72751 00:12:58.477 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72751 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.045 19:50:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.045 19:50:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:59.045 00:12:59.045 real 0m3.123s 00:12:59.045 user 0m10.261s 00:12:59.045 sys 0m1.206s 00:12:59.045 19:50:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.045 19:50:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.045 ************************************ 00:12:59.045 END TEST nvmf_bdevio_no_huge 00:12:59.045 ************************************ 00:12:59.045 19:50:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:59.045 19:50:53 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:59.045 19:50:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.045 19:50:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.045 19:50:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.045 ************************************ 00:12:59.045 START TEST nvmf_tls 00:12:59.045 ************************************ 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:59.045 * Looking for test storage... 
00:12:59.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.045 19:50:53 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:59.046 Cannot find device "nvmf_tgt_br" 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.046 Cannot find device "nvmf_tgt_br2" 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:59.046 Cannot find device "nvmf_tgt_br" 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:59.046 Cannot find device "nvmf_tgt_br2" 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:59.046 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:59.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:59.306 00:12:59.306 --- 10.0.0.2 ping statistics --- 00:12:59.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.306 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:59.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:59.306 00:12:59.306 --- 10.0.0.3 ping statistics --- 00:12:59.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.306 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:59.306 00:12:59.306 --- 10.0.0.1 ping statistics --- 00:12:59.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.306 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72966 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72966 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72966 ']' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.306 19:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.306 [2024-07-15 19:50:53.534558] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:59.306 [2024-07-15 19:50:53.534625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.565 [2024-07-15 19:50:53.672858] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.565 [2024-07-15 19:50:53.789557] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.565 [2024-07-15 19:50:53.789623] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
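Note: both tests run against the same two-namespace topology that nvmf_veth_init builds in the traces above. Condensed to the commands that matter (link-up steps and the ping checks omitted), and using the interface names from this log, the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
# three veth pairs: one initiator-side link, two target-side links
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers together and let NVMe/TCP traffic in
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT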
00:12:59.565 [2024-07-15 19:50:53.789653] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.565 [2024-07-15 19:50:53.789677] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.565 [2024-07-15 19:50:53.789685] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.565 [2024-07-15 19:50:53.789713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.501 19:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.501 19:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:00.502 19:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:00.760 true 00:13:00.760 19:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.760 19:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:01.019 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:01.019 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:01.020 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:01.279 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.279 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:01.537 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:01.537 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:01.537 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:01.796 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.796 19:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:02.054 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:02.054 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:02.054 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:02.054 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:02.312 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:02.312 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:02.312 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:02.570 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:02.570 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
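Note: because this target was started with --wait-for-rpc, the tls.sh steps traced above can reshape the socket layer before the framework finishes initializing. The round-trips boil down to the rpc.py sequence sketched below; the jq reads only confirm that each setting took effect, and the ktls value echoed next in the log is one such confirmation.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                        # use the ssl socket implementation by default
$rpc sock_impl_set_options -i ssl --tls-version 13       # negotiate TLS 1.3
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
$rpc sock_impl_set_options -i ssl --enable-ktls          # optionally hand the record layer to the kernel
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
$rpc sock_impl_set_options -i ssl --disable-ktls         # the test toggles it back off before proceeding
$rpc framework_start_init                                # only now finish subsystem initialization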
00:13:02.829 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:02.829 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:02.829 19:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:03.101 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:03.101 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ximMFOM3yW 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.orcLRN9PyB 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ximMFOM3yW 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.orcLRN9PyB 00:13:03.372 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:03.939 19:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:03.939 [2024-07-15 19:50:58.157744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:04.198 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ximMFOM3yW 00:13:04.198 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ximMFOM3yW 00:13:04.198 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:04.198 [2024-07-15 19:50:58.438170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.457 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:04.457 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:04.716 [2024-07-15 19:50:58.882242] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:04.716 [2024-07-15 19:50:58.882527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.716 19:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:04.976 malloc0 00:13:04.976 19:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:05.235 19:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ximMFOM3yW 00:13:05.493 [2024-07-15 19:50:59.726014] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:05.752 19:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ximMFOM3yW 00:13:15.796 Initializing NVMe Controllers 00:13:15.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:15.796 Initialization complete. Launching workers. 
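Note: the setup_nvmf_tgt trace above is the heart of the TLS path: the TCP listener is created with -k and the host NQN is registered together with the PSK file, after which the initiator presents the same key. With rpc.py as in the earlier sketch, the target-side sequence and the perf run look roughly like the following; the IOPS/latency table that follows is the output of this run over the TLS-encrypted queue pair.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.ximMFOM3yW        # PSK interchange file generated and chmod 0600'd above
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
# initiator side: run perf in the same namespace, pointing it at the key file
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"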
00:13:15.796 ======================================================== 00:13:15.796 Latency(us) 00:13:15.796 Device Information : IOPS MiB/s Average min max 00:13:15.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9296.78 36.32 6885.66 1315.87 11419.02 00:13:15.796 ======================================================== 00:13:15.796 Total : 9296.78 36.32 6885.66 1315.87 11419.02 00:13:15.796 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ximMFOM3yW 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ximMFOM3yW' 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73198 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73198 /var/tmp/bdevperf.sock 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73198 ']' 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.796 19:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.796 [2024-07-15 19:51:09.986159] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:15.796 [2024-07-15 19:51:09.986465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73198 ] 00:13:16.055 [2024-07-15 19:51:10.122520] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.055 [2024-07-15 19:51:10.258933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.315 [2024-07-15 19:51:10.318727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:16.315 19:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.315 19:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:16.315 19:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ximMFOM3yW 00:13:16.574 [2024-07-15 19:51:10.637682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:16.574 [2024-07-15 19:51:10.637899] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:16.574 TLSTESTn1 00:13:16.574 19:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:16.833 Running I/O for 10 seconds... 00:13:26.820 00:13:26.820 Latency(us) 00:13:26.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.820 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:26.820 Verification LBA range: start 0x0 length 0x2000 00:13:26.820 TLSTESTn1 : 10.02 3741.39 14.61 0.00 0.00 34145.24 7119.59 38606.66 00:13:26.820 =================================================================================================================== 00:13:26.820 Total : 3741.39 14.61 0.00 0.00 34145.24 7119.59 38606.66 00:13:26.820 0 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73198 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73198 ']' 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73198 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73198 00:13:26.820 killing process with pid 73198 00:13:26.820 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.820 00:13:26.820 Latency(us) 00:13:26.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.820 =================================================================================================================== 00:13:26.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73198' 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73198 00:13:26.820 [2024-07-15 19:51:20.926693] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:26.820 19:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73198 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.orcLRN9PyB 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.orcLRN9PyB 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.orcLRN9PyB 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.orcLRN9PyB' 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73323 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73323 /var/tmp/bdevperf.sock 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73323 ']' 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.079 19:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.079 [2024-07-15 19:51:21.198195] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:27.079 [2024-07-15 19:51:21.198435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73323 ] 00:13:27.342 [2024-07-15 19:51:21.332366] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.342 [2024-07-15 19:51:21.448827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.342 [2024-07-15 19:51:21.505011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.orcLRN9PyB 00:13:28.276 [2024-07-15 19:51:22.479151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.276 [2024-07-15 19:51:22.479492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:28.276 [2024-07-15 19:51:22.487806] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:28.276 [2024-07-15 19:51:22.488398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb73d0 (107): Transport endpoint is not connected 00:13:28.276 [2024-07-15 19:51:22.489376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb73d0 (9): Bad file descriptor 00:13:28.276 [2024-07-15 19:51:22.490372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:28.276 [2024-07-15 19:51:22.490396] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:28.276 [2024-07-15 19:51:22.490411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:28.276 request: 00:13:28.276 { 00:13:28.276 "name": "TLSTEST", 00:13:28.276 "trtype": "tcp", 00:13:28.276 "traddr": "10.0.0.2", 00:13:28.276 "adrfam": "ipv4", 00:13:28.276 "trsvcid": "4420", 00:13:28.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.276 "prchk_reftag": false, 00:13:28.276 "prchk_guard": false, 00:13:28.276 "hdgst": false, 00:13:28.276 "ddgst": false, 00:13:28.276 "psk": "/tmp/tmp.orcLRN9PyB", 00:13:28.276 "method": "bdev_nvme_attach_controller", 00:13:28.276 "req_id": 1 00:13:28.276 } 00:13:28.276 Got JSON-RPC error response 00:13:28.276 response: 00:13:28.276 { 00:13:28.276 "code": -5, 00:13:28.276 "message": "Input/output error" 00:13:28.276 } 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73323 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73323 ']' 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73323 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.276 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73323 00:13:28.534 killing process with pid 73323 00:13:28.534 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.534 00:13:28.534 Latency(us) 00:13:28.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.534 =================================================================================================================== 00:13:28.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.534 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:28.534 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:28.534 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73323' 00:13:28.534 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73323 00:13:28.534 [2024-07-15 19:51:22.535466] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:28.534 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73323 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ximMFOM3yW 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ximMFOM3yW 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ximMFOM3yW 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ximMFOM3yW' 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73346 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73346 /var/tmp/bdevperf.sock 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73346 ']' 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.847 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.848 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.848 19:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.848 [2024-07-15 19:51:22.852477] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:28.848 [2024-07-15 19:51:22.852766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73346 ] 00:13:28.848 [2024-07-15 19:51:22.995556] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.119 [2024-07-15 19:51:23.126755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.119 [2024-07-15 19:51:23.186468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.687 19:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.687 19:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:29.687 19:51:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ximMFOM3yW 00:13:29.944 [2024-07-15 19:51:24.066811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.944 [2024-07-15 19:51:24.066933] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:29.945 [2024-07-15 19:51:24.078040] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:29.945 [2024-07-15 19:51:24.078082] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:29.945 [2024-07-15 19:51:24.078141] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:29.945 [2024-07-15 19:51:24.078682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc03d0 (107): Transport endpoint is not connected 00:13:29.945 [2024-07-15 19:51:24.079667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc03d0 (9): Bad file descriptor 00:13:29.945 [2024-07-15 19:51:24.080663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:29.945 [2024-07-15 19:51:24.080696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:29.945 [2024-07-15 19:51:24.080711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
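The target-side errors above explain this expected failure: during the TLS handshake the listener builds a PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>" and looks it up among the hosts registered with nvmf_subsystem_add_host. nqn.2016-06.io.spdk:host2 was never registered against cnode1, so no PSK is found, the connection is dropped, and the initiator then reports errno 107. A tiny illustrative helper for that identity string, with the format copied verbatim from the tcp_sock_get_key message above:

    # Illustrative only: the PSK identity the target searched for, as printed by
    # tcp_sock_get_key / posix_sock_psk_find_session_server_cb in the log above.
    def nvme_tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        return "NVMe0R01 {} {}".format(hostnqn, subnqn)

    print(nvme_tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1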
00:13:29.945 request: 00:13:29.945 { 00:13:29.945 "name": "TLSTEST", 00:13:29.945 "trtype": "tcp", 00:13:29.945 "traddr": "10.0.0.2", 00:13:29.945 "adrfam": "ipv4", 00:13:29.945 "trsvcid": "4420", 00:13:29.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:29.945 "prchk_reftag": false, 00:13:29.945 "prchk_guard": false, 00:13:29.945 "hdgst": false, 00:13:29.945 "ddgst": false, 00:13:29.945 "psk": "/tmp/tmp.ximMFOM3yW", 00:13:29.945 "method": "bdev_nvme_attach_controller", 00:13:29.945 "req_id": 1 00:13:29.945 } 00:13:29.945 Got JSON-RPC error response 00:13:29.945 response: 00:13:29.945 { 00:13:29.945 "code": -5, 00:13:29.945 "message": "Input/output error" 00:13:29.945 } 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73346 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73346 ']' 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73346 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73346 00:13:29.945 killing process with pid 73346 00:13:29.945 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.945 00:13:29.945 Latency(us) 00:13:29.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.945 =================================================================================================================== 00:13:29.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73346' 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73346 00:13:29.945 [2024-07-15 19:51:24.133194] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:29.945 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73346 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ximMFOM3yW 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ximMFOM3yW 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:30.203 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ximMFOM3yW 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ximMFOM3yW' 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73374 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73374 /var/tmp/bdevperf.sock 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73374 ']' 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.204 19:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.204 [2024-07-15 19:51:24.419452] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:30.204 [2024-07-15 19:51:24.421427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73374 ] 00:13:30.462 [2024-07-15 19:51:24.556489] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.462 [2024-07-15 19:51:24.673115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.721 [2024-07-15 19:51:24.727207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:31.288 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.288 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:31.288 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ximMFOM3yW 00:13:31.548 [2024-07-15 19:51:25.681706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:31.548 [2024-07-15 19:51:25.682154] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:31.548 [2024-07-15 19:51:25.688184] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:31.548 [2024-07-15 19:51:25.688226] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:31.548 [2024-07-15 19:51:25.688289] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:31.548 [2024-07-15 19:51:25.688935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f3d0 (107): Transport endpoint is not connected 00:13:31.548 [2024-07-15 19:51:25.689918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f3d0 (9): Bad file descriptor 00:13:31.548 [2024-07-15 19:51:25.690915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:31.548 [2024-07-15 19:51:25.690942] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:31.548 [2024-07-15 19:51:25.690957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:31.548 request: 00:13:31.548 { 00:13:31.548 "name": "TLSTEST", 00:13:31.548 "trtype": "tcp", 00:13:31.548 "traddr": "10.0.0.2", 00:13:31.548 "adrfam": "ipv4", 00:13:31.548 "trsvcid": "4420", 00:13:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:31.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.548 "prchk_reftag": false, 00:13:31.548 "prchk_guard": false, 00:13:31.548 "hdgst": false, 00:13:31.548 "ddgst": false, 00:13:31.548 "psk": "/tmp/tmp.ximMFOM3yW", 00:13:31.548 "method": "bdev_nvme_attach_controller", 00:13:31.548 "req_id": 1 00:13:31.548 } 00:13:31.548 Got JSON-RPC error response 00:13:31.548 response: 00:13:31.548 { 00:13:31.548 "code": -5, 00:13:31.548 "message": "Input/output error" 00:13:31.548 } 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73374 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73374 ']' 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73374 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73374 00:13:31.548 killing process with pid 73374 00:13:31.548 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.548 00:13:31.548 Latency(us) 00:13:31.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.548 =================================================================================================================== 00:13:31.548 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73374' 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73374 00:13:31.548 [2024-07-15 19:51:25.740076] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:31.548 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73374 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73401 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73401 /var/tmp/bdevperf.sock 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73401 ']' 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.807 19:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.807 [2024-07-15 19:51:26.023296] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:31.807 [2024-07-15 19:51:26.023807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73401 ] 00:13:32.066 [2024-07-15 19:51:26.159430] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.066 [2024-07-15 19:51:26.265068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.325 [2024-07-15 19:51:26.324779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:32.894 19:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.894 19:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:32.894 19:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:33.153 [2024-07-15 19:51:27.268612] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:33.153 [2024-07-15 19:51:27.271515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adada0 (9): Bad file descriptor 00:13:33.153 [2024-07-15 19:51:27.272511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:33.153 [2024-07-15 19:51:27.272675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:33.153 [2024-07-15 19:51:27.272797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:33.153 request: 00:13:33.153 { 00:13:33.153 "name": "TLSTEST", 00:13:33.153 "trtype": "tcp", 00:13:33.153 "traddr": "10.0.0.2", 00:13:33.153 "adrfam": "ipv4", 00:13:33.153 "trsvcid": "4420", 00:13:33.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:33.153 "prchk_reftag": false, 00:13:33.153 "prchk_guard": false, 00:13:33.153 "hdgst": false, 00:13:33.153 "ddgst": false, 00:13:33.153 "method": "bdev_nvme_attach_controller", 00:13:33.153 "req_id": 1 00:13:33.153 } 00:13:33.153 Got JSON-RPC error response 00:13:33.153 response: 00:13:33.153 { 00:13:33.153 "code": -5, 00:13:33.153 "message": "Input/output error" 00:13:33.153 } 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73401 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73401 ']' 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73401 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73401 00:13:33.153 killing process with pid 73401 00:13:33.153 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.153 00:13:33.153 Latency(us) 00:13:33.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.153 =================================================================================================================== 00:13:33.153 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73401' 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73401 00:13:33.153 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73401 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72966 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72966 ']' 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72966 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72966 00:13:33.412 killing process with pid 72966 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72966' 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72966 00:13:33.412 [2024-07-15 19:51:27.574813] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:33.412 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72966 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.X4p5aVIgFR 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.X4p5aVIgFR 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73439 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73439 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73439 ']' 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:33.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.671 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.672 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.672 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.672 19:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.930 [2024-07-15 19:51:27.928324] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:33.930 [2024-07-15 19:51:27.928465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.930 [2024-07-15 19:51:28.066812] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.194 [2024-07-15 19:51:28.177849] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.194 [2024-07-15 19:51:28.177971] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.194 [2024-07-15 19:51:28.177983] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.194 [2024-07-15 19:51:28.177991] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.194 [2024-07-15 19:51:28.177999] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.194 [2024-07-15 19:51:28.178027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.194 [2024-07-15 19:51:28.235916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.X4p5aVIgFR 00:13:34.768 19:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:35.026 [2024-07-15 19:51:29.140607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.026 19:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:35.284 19:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:35.542 [2024-07-15 19:51:29.692755] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:35.542 [2024-07-15 19:51:29.692975] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.542 19:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:35.802 malloc0 00:13:35.802 19:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.060 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:36.319 
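The key material used from here on comes from the format_interchange_psk call above: the configured secret (the literal ASCII string 00112233445566778899aabbccddeeff0011223344556677) is suffixed with its little-endian CRC32, base64-encoded, and wrapped in an NVMeTLSkey-1:<hash>: prefix, then written to /tmp/tmp.X4p5aVIgFR and restricted to mode 0600. A minimal Python sketch of that computation, assuming nothing beyond the standard base64 and zlib modules, reproduces the key_long value shown above:

    import base64
    import zlib

    # Sketch of the PSK interchange-format computation performed by format_key above.
    secret = b"00112233445566778899aabbccddeeff0011223344556677"
    hash_id = 2  # carried as the two-digit hex field of the key string

    crc = zlib.crc32(secret).to_bytes(4, byteorder="little")
    key = "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(secret + crc).decode())
    print(key)
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

That one file is then referenced by both sides: the target through nvmf_subsystem_add_host --psk and, in the bdevperf runs, the initiator through bdev_nvme_attach_controller --psk.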
[2024-07-15 19:51:30.392351] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4p5aVIgFR 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X4p5aVIgFR' 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73498 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73498 /var/tmp/bdevperf.sock 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73498 ']' 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.319 19:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.319 [2024-07-15 19:51:30.469942] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:36.319 [2024-07-15 19:51:30.470216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73498 ] 00:13:36.578 [2024-07-15 19:51:30.609252] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.578 [2024-07-15 19:51:30.739625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.578 [2024-07-15 19:51:30.799711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:37.146 19:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.146 19:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:37.146 19:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:37.404 [2024-07-15 19:51:31.621358] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.404 [2024-07-15 19:51:31.621480] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:37.663 TLSTESTn1 00:13:37.663 19:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:37.663 Running I/O for 10 seconds... 00:13:47.635 00:13:47.635 Latency(us) 00:13:47.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.635 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:47.635 Verification LBA range: start 0x0 length 0x2000 00:13:47.635 TLSTESTn1 : 10.02 3746.00 14.63 0.00 0.00 34089.59 5510.98 32887.16 00:13:47.635 =================================================================================================================== 00:13:47.635 Total : 3746.00 14.63 0.00 0.00 34089.59 5510.98 32887.16 00:13:47.635 0 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73498 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73498 ']' 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73498 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.635 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73498 00:13:47.894 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:47.894 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:47.894 killing process with pid 73498 00:13:47.894 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73498' 00:13:47.894 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.894 00:13:47.894 Latency(us) 00:13:47.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.894 
=================================================================================================================== 00:13:47.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:47.894 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73498 00:13:47.894 [2024-07-15 19:51:41.894396] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:47.894 19:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73498 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.X4p5aVIgFR 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4p5aVIgFR 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4p5aVIgFR 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:47.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4p5aVIgFR 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X4p5aVIgFR' 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73628 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73628 /var/tmp/bdevperf.sock 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73628 ']' 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.894 19:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.162 [2024-07-15 19:51:42.186168] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:48.162 [2024-07-15 19:51:42.186576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:13:48.162 [2024-07-15 19:51:42.319888] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.423 [2024-07-15 19:51:42.435261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.423 [2024-07-15 19:51:42.490184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.989 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.989 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:48.989 19:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:49.247 [2024-07-15 19:51:43.393165] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.247 [2024-07-15 19:51:43.393505] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:49.247 [2024-07-15 19:51:43.393721] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.X4p5aVIgFR 00:13:49.247 request: 00:13:49.247 { 00:13:49.247 "name": "TLSTEST", 00:13:49.247 "trtype": "tcp", 00:13:49.247 "traddr": "10.0.0.2", 00:13:49.247 "adrfam": "ipv4", 00:13:49.247 "trsvcid": "4420", 00:13:49.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:49.248 "prchk_reftag": false, 00:13:49.248 "prchk_guard": false, 00:13:49.248 "hdgst": false, 00:13:49.248 "ddgst": false, 00:13:49.248 "psk": "/tmp/tmp.X4p5aVIgFR", 00:13:49.248 "method": "bdev_nvme_attach_controller", 00:13:49.248 "req_id": 1 00:13:49.248 } 00:13:49.248 Got JSON-RPC error response 00:13:49.248 response: 00:13:49.248 { 00:13:49.248 "code": -1, 00:13:49.248 "message": "Operation not permitted" 00:13:49.248 } 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73628 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73628 ']' 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73628 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73628 00:13:49.248 killing process with pid 73628 00:13:49.248 Received shutdown signal, test time was about 10.000000 seconds 00:13:49.248 00:13:49.248 Latency(us) 00:13:49.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.248 =================================================================================================================== 00:13:49.248 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73628' 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73628 00:13:49.248 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73628 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73439 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73439 ']' 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73439 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73439 00:13:49.506 killing process with pid 73439 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73439' 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73439 00:13:49.506 [2024-07-15 19:51:43.712233] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:49.506 19:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73439 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73661 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73661 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73661 ']' 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.074 19:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.074 [2024-07-15 19:51:44.097782] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:50.074 [2024-07-15 19:51:44.098114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.074 [2024-07-15 19:51:44.233010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.332 [2024-07-15 19:51:44.378988] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.332 [2024-07-15 19:51:44.379310] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.332 [2024-07-15 19:51:44.379434] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.332 [2024-07-15 19:51:44.379448] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.332 [2024-07-15 19:51:44.379455] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.332 [2024-07-15 19:51:44.379494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.332 [2024-07-15 19:51:44.451222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.X4p5aVIgFR 00:13:50.899 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:51.158 [2024-07-15 19:51:45.314310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.158 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:51.416 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:51.675 [2024-07-15 19:51:45.822387] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:51.675 [2024-07-15 19:51:45.822669] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.675 19:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:51.933 malloc0 00:13:51.933 19:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:52.191 19:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:52.449 [2024-07-15 19:51:46.530061] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:52.449 [2024-07-15 19:51:46.530139] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:52.449 [2024-07-15 19:51:46.530187] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:52.449 request: 00:13:52.449 { 00:13:52.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.449 "host": "nqn.2016-06.io.spdk:host1", 00:13:52.449 "psk": "/tmp/tmp.X4p5aVIgFR", 00:13:52.449 "method": "nvmf_subsystem_add_host", 00:13:52.449 "req_id": 1 00:13:52.449 } 00:13:52.449 Got JSON-RPC error response 00:13:52.449 response: 00:13:52.449 { 00:13:52.449 "code": -32603, 00:13:52.449 "message": "Internal error" 00:13:52.449 } 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73661 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73661 ']' 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73661 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73661 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:52.449 killing process with pid 73661 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73661' 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73661 00:13:52.449 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73661 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.X4p5aVIgFR 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
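This block is the negative permission test. With the key file at mode 0666, the initiator-side attach was rejected earlier (bdev_nvme_load_psk: Incorrect permissions for PSK file, JSON-RPC error -1 Operation not permitted), and the target-side nvmf_subsystem_add_host now fails the same way (tcp_load_psk: Incorrect permissions for PSK file, JSON-RPC error -32603 Internal error), after which the file is switched back to 0600 and the target is restarted. A rough sketch of the kind of mode check this behaviour implies, with the caveat that the exact condition inside SPDK may differ, could be:

    import os
    import stat

    # Illustrative check only: reject a PSK file that group or others can access,
    # matching the observed behaviour (0666 rejected, 0600 accepted).
    def psk_file_permissions_ok(path: str) -> bool:
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & 0o077) == 0

    # Mirrors the test flow: psk_file_permissions_ok("/tmp/tmp.X4p5aVIgFR")
    # is False while the file is 0666 and True once it is back at 0600.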
00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73729 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73729 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73729 ']' 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.707 19:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.003 [2024-07-15 19:51:46.969800] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:53.003 [2024-07-15 19:51:46.970211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.003 [2024-07-15 19:51:47.110610] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.262 [2024-07-15 19:51:47.258642] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.262 [2024-07-15 19:51:47.258969] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.262 [2024-07-15 19:51:47.259092] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.262 [2024-07-15 19:51:47.259106] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.262 [2024-07-15 19:51:47.259114] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
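The target in this job is launched through "ip netns exec nvmf_tgt_ns_spdk", i.e. it lives in its own network namespace and listens on 10.0.0.2 while bdevperf stays in the default namespace. The harness' actual veth names and initiator address are not visible in this log, so the following is only a rough sketch with assumed names, run as root:

# Assumed interface names and 10.0.0.1 initiator address, for illustration only.
ip netns add nvmf_tgt_ns_spdk
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev veth_init && ip link set veth_init up
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec nvmf_tgt_ns_spdk ip link set veth_tgt up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Then the target is started inside the namespace, matching the command line in the log:
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2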
00:13:53.262 [2024-07-15 19:51:47.259155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.262 [2024-07-15 19:51:47.331105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.X4p5aVIgFR 00:13:53.829 19:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.086 [2024-07-15 19:51:48.286264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.086 19:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.343 19:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:54.601 [2024-07-15 19:51:48.822376] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.601 [2024-07-15 19:51:48.822691] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.601 19:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:54.859 malloc0 00:13:54.859 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.118 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:55.377 [2024-07-15 19:51:49.561259] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73784 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:55.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
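Condensed from the xtrace above and the bdevperf attach that follows below, the successful TLS pass boils down to the following sequence (absolute paths shortened; the PSK file is the one fixed up with chmod 0600 earlier):

PSK=/tmp/tmp.X4p5aVIgFR
RPC=scripts/rpc.py                       # /home/vagrant/spdk_repo/spdk/scripts/rpc.py in this run
# Target side, over the default RPC socket /var/tmp/spdk.sock:
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"   # PSK-path form, deprecated for v24.09
# Initiator side: bdevperf waits for configuration on its own RPC socket, then runs the workload.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (the test waits for /var/tmp/bdevperf.sock via waitforlisten before issuing the attach)
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK"
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests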
00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73784 /var/tmp/bdevperf.sock 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73784 ']' 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.377 19:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 [2024-07-15 19:51:49.625150] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:55.636 [2024-07-15 19:51:49.625520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73784 ] 00:13:55.636 [2024-07-15 19:51:49.758931] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.636 [2024-07-15 19:51:49.875900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.895 [2024-07-15 19:51:49.929743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.462 19:51:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.462 19:51:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:56.462 19:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:13:56.721 [2024-07-15 19:51:50.796343] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:56.721 [2024-07-15 19:51:50.796490] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:56.721 TLSTESTn1 00:13:56.721 19:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:56.980 19:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:56.980 "subsystems": [ 00:13:56.980 { 00:13:56.980 "subsystem": "keyring", 00:13:56.980 "config": [] 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "subsystem": "iobuf", 00:13:56.980 "config": [ 00:13:56.980 { 00:13:56.980 "method": "iobuf_set_options", 00:13:56.980 "params": { 00:13:56.980 "small_pool_count": 8192, 00:13:56.980 "large_pool_count": 1024, 00:13:56.980 "small_bufsize": 8192, 00:13:56.980 "large_bufsize": 135168 00:13:56.980 } 00:13:56.980 } 00:13:56.980 ] 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "subsystem": "sock", 00:13:56.980 "config": [ 00:13:56.980 { 00:13:56.980 "method": "sock_set_default_impl", 00:13:56.980 "params": { 00:13:56.980 "impl_name": "uring" 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "sock_impl_set_options", 00:13:56.980 "params": { 00:13:56.980 "impl_name": "ssl", 00:13:56.980 "recv_buf_size": 4096, 00:13:56.980 "send_buf_size": 4096, 00:13:56.980 "enable_recv_pipe": true, 00:13:56.980 
"enable_quickack": false, 00:13:56.980 "enable_placement_id": 0, 00:13:56.980 "enable_zerocopy_send_server": true, 00:13:56.980 "enable_zerocopy_send_client": false, 00:13:56.980 "zerocopy_threshold": 0, 00:13:56.980 "tls_version": 0, 00:13:56.980 "enable_ktls": false 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "sock_impl_set_options", 00:13:56.980 "params": { 00:13:56.980 "impl_name": "posix", 00:13:56.980 "recv_buf_size": 2097152, 00:13:56.980 "send_buf_size": 2097152, 00:13:56.980 "enable_recv_pipe": true, 00:13:56.980 "enable_quickack": false, 00:13:56.980 "enable_placement_id": 0, 00:13:56.980 "enable_zerocopy_send_server": true, 00:13:56.980 "enable_zerocopy_send_client": false, 00:13:56.980 "zerocopy_threshold": 0, 00:13:56.980 "tls_version": 0, 00:13:56.980 "enable_ktls": false 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "sock_impl_set_options", 00:13:56.980 "params": { 00:13:56.980 "impl_name": "uring", 00:13:56.980 "recv_buf_size": 2097152, 00:13:56.980 "send_buf_size": 2097152, 00:13:56.980 "enable_recv_pipe": true, 00:13:56.980 "enable_quickack": false, 00:13:56.980 "enable_placement_id": 0, 00:13:56.980 "enable_zerocopy_send_server": false, 00:13:56.980 "enable_zerocopy_send_client": false, 00:13:56.980 "zerocopy_threshold": 0, 00:13:56.980 "tls_version": 0, 00:13:56.980 "enable_ktls": false 00:13:56.980 } 00:13:56.980 } 00:13:56.980 ] 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "subsystem": "vmd", 00:13:56.980 "config": [] 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "subsystem": "accel", 00:13:56.980 "config": [ 00:13:56.980 { 00:13:56.980 "method": "accel_set_options", 00:13:56.980 "params": { 00:13:56.980 "small_cache_size": 128, 00:13:56.980 "large_cache_size": 16, 00:13:56.980 "task_count": 2048, 00:13:56.980 "sequence_count": 2048, 00:13:56.980 "buf_count": 2048 00:13:56.980 } 00:13:56.980 } 00:13:56.980 ] 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "subsystem": "bdev", 00:13:56.980 "config": [ 00:13:56.980 { 00:13:56.980 "method": "bdev_set_options", 00:13:56.980 "params": { 00:13:56.980 "bdev_io_pool_size": 65535, 00:13:56.980 "bdev_io_cache_size": 256, 00:13:56.980 "bdev_auto_examine": true, 00:13:56.980 "iobuf_small_cache_size": 128, 00:13:56.980 "iobuf_large_cache_size": 16 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "bdev_raid_set_options", 00:13:56.980 "params": { 00:13:56.980 "process_window_size_kb": 1024 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "bdev_iscsi_set_options", 00:13:56.980 "params": { 00:13:56.980 "timeout_sec": 30 00:13:56.980 } 00:13:56.980 }, 00:13:56.980 { 00:13:56.980 "method": "bdev_nvme_set_options", 00:13:56.980 "params": { 00:13:56.980 "action_on_timeout": "none", 00:13:56.980 "timeout_us": 0, 00:13:56.980 "timeout_admin_us": 0, 00:13:56.980 "keep_alive_timeout_ms": 10000, 00:13:56.980 "arbitration_burst": 0, 00:13:56.980 "low_priority_weight": 0, 00:13:56.980 "medium_priority_weight": 0, 00:13:56.980 "high_priority_weight": 0, 00:13:56.980 "nvme_adminq_poll_period_us": 10000, 00:13:56.980 "nvme_ioq_poll_period_us": 0, 00:13:56.980 "io_queue_requests": 0, 00:13:56.980 "delay_cmd_submit": true, 00:13:56.980 "transport_retry_count": 4, 00:13:56.980 "bdev_retry_count": 3, 00:13:56.980 "transport_ack_timeout": 0, 00:13:56.980 "ctrlr_loss_timeout_sec": 0, 00:13:56.980 "reconnect_delay_sec": 0, 00:13:56.980 "fast_io_fail_timeout_sec": 0, 00:13:56.981 "disable_auto_failback": false, 00:13:56.981 "generate_uuids": false, 00:13:56.981 
"transport_tos": 0, 00:13:56.981 "nvme_error_stat": false, 00:13:56.981 "rdma_srq_size": 0, 00:13:56.981 "io_path_stat": false, 00:13:56.981 "allow_accel_sequence": false, 00:13:56.981 "rdma_max_cq_size": 0, 00:13:56.981 "rdma_cm_event_timeout_ms": 0, 00:13:56.981 "dhchap_digests": [ 00:13:56.981 "sha256", 00:13:56.981 "sha384", 00:13:56.981 "sha512" 00:13:56.981 ], 00:13:56.981 "dhchap_dhgroups": [ 00:13:56.981 "null", 00:13:56.981 "ffdhe2048", 00:13:56.981 "ffdhe3072", 00:13:56.981 "ffdhe4096", 00:13:56.981 "ffdhe6144", 00:13:56.981 "ffdhe8192" 00:13:56.981 ] 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "bdev_nvme_set_hotplug", 00:13:56.981 "params": { 00:13:56.981 "period_us": 100000, 00:13:56.981 "enable": false 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "bdev_malloc_create", 00:13:56.981 "params": { 00:13:56.981 "name": "malloc0", 00:13:56.981 "num_blocks": 8192, 00:13:56.981 "block_size": 4096, 00:13:56.981 "physical_block_size": 4096, 00:13:56.981 "uuid": "bdbdfc2f-0434-436a-bbac-6849b739f0fb", 00:13:56.981 "optimal_io_boundary": 0 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "bdev_wait_for_examine" 00:13:56.981 } 00:13:56.981 ] 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "subsystem": "nbd", 00:13:56.981 "config": [] 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "subsystem": "scheduler", 00:13:56.981 "config": [ 00:13:56.981 { 00:13:56.981 "method": "framework_set_scheduler", 00:13:56.981 "params": { 00:13:56.981 "name": "static" 00:13:56.981 } 00:13:56.981 } 00:13:56.981 ] 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "subsystem": "nvmf", 00:13:56.981 "config": [ 00:13:56.981 { 00:13:56.981 "method": "nvmf_set_config", 00:13:56.981 "params": { 00:13:56.981 "discovery_filter": "match_any", 00:13:56.981 "admin_cmd_passthru": { 00:13:56.981 "identify_ctrlr": false 00:13:56.981 } 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_set_max_subsystems", 00:13:56.981 "params": { 00:13:56.981 "max_subsystems": 1024 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_set_crdt", 00:13:56.981 "params": { 00:13:56.981 "crdt1": 0, 00:13:56.981 "crdt2": 0, 00:13:56.981 "crdt3": 0 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_create_transport", 00:13:56.981 "params": { 00:13:56.981 "trtype": "TCP", 00:13:56.981 "max_queue_depth": 128, 00:13:56.981 "max_io_qpairs_per_ctrlr": 127, 00:13:56.981 "in_capsule_data_size": 4096, 00:13:56.981 "max_io_size": 131072, 00:13:56.981 "io_unit_size": 131072, 00:13:56.981 "max_aq_depth": 128, 00:13:56.981 "num_shared_buffers": 511, 00:13:56.981 "buf_cache_size": 4294967295, 00:13:56.981 "dif_insert_or_strip": false, 00:13:56.981 "zcopy": false, 00:13:56.981 "c2h_success": false, 00:13:56.981 "sock_priority": 0, 00:13:56.981 "abort_timeout_sec": 1, 00:13:56.981 "ack_timeout": 0, 00:13:56.981 "data_wr_pool_size": 0 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_create_subsystem", 00:13:56.981 "params": { 00:13:56.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.981 "allow_any_host": false, 00:13:56.981 "serial_number": "SPDK00000000000001", 00:13:56.981 "model_number": "SPDK bdev Controller", 00:13:56.981 "max_namespaces": 10, 00:13:56.981 "min_cntlid": 1, 00:13:56.981 "max_cntlid": 65519, 00:13:56.981 "ana_reporting": false 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_subsystem_add_host", 00:13:56.981 "params": { 00:13:56.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:13:56.981 "host": "nqn.2016-06.io.spdk:host1", 00:13:56.981 "psk": "/tmp/tmp.X4p5aVIgFR" 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_subsystem_add_ns", 00:13:56.981 "params": { 00:13:56.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.981 "namespace": { 00:13:56.981 "nsid": 1, 00:13:56.981 "bdev_name": "malloc0", 00:13:56.981 "nguid": "BDBDFC2F0434436ABBAC6849B739F0FB", 00:13:56.981 "uuid": "bdbdfc2f-0434-436a-bbac-6849b739f0fb", 00:13:56.981 "no_auto_visible": false 00:13:56.981 } 00:13:56.981 } 00:13:56.981 }, 00:13:56.981 { 00:13:56.981 "method": "nvmf_subsystem_add_listener", 00:13:56.981 "params": { 00:13:56.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.981 "listen_address": { 00:13:56.981 "trtype": "TCP", 00:13:56.981 "adrfam": "IPv4", 00:13:56.981 "traddr": "10.0.0.2", 00:13:56.981 "trsvcid": "4420" 00:13:56.981 }, 00:13:56.981 "secure_channel": true 00:13:56.981 } 00:13:56.981 } 00:13:56.981 ] 00:13:56.981 } 00:13:56.981 ] 00:13:56.981 }' 00:13:56.981 19:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:57.549 19:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:57.549 "subsystems": [ 00:13:57.549 { 00:13:57.549 "subsystem": "keyring", 00:13:57.549 "config": [] 00:13:57.549 }, 00:13:57.549 { 00:13:57.549 "subsystem": "iobuf", 00:13:57.549 "config": [ 00:13:57.549 { 00:13:57.549 "method": "iobuf_set_options", 00:13:57.549 "params": { 00:13:57.549 "small_pool_count": 8192, 00:13:57.549 "large_pool_count": 1024, 00:13:57.549 "small_bufsize": 8192, 00:13:57.549 "large_bufsize": 135168 00:13:57.549 } 00:13:57.549 } 00:13:57.549 ] 00:13:57.549 }, 00:13:57.549 { 00:13:57.549 "subsystem": "sock", 00:13:57.549 "config": [ 00:13:57.549 { 00:13:57.549 "method": "sock_set_default_impl", 00:13:57.549 "params": { 00:13:57.549 "impl_name": "uring" 00:13:57.549 } 00:13:57.549 }, 00:13:57.549 { 00:13:57.549 "method": "sock_impl_set_options", 00:13:57.549 "params": { 00:13:57.549 "impl_name": "ssl", 00:13:57.549 "recv_buf_size": 4096, 00:13:57.549 "send_buf_size": 4096, 00:13:57.549 "enable_recv_pipe": true, 00:13:57.549 "enable_quickack": false, 00:13:57.549 "enable_placement_id": 0, 00:13:57.549 "enable_zerocopy_send_server": true, 00:13:57.549 "enable_zerocopy_send_client": false, 00:13:57.549 "zerocopy_threshold": 0, 00:13:57.549 "tls_version": 0, 00:13:57.549 "enable_ktls": false 00:13:57.549 } 00:13:57.549 }, 00:13:57.549 { 00:13:57.549 "method": "sock_impl_set_options", 00:13:57.549 "params": { 00:13:57.549 "impl_name": "posix", 00:13:57.549 "recv_buf_size": 2097152, 00:13:57.549 "send_buf_size": 2097152, 00:13:57.549 "enable_recv_pipe": true, 00:13:57.549 "enable_quickack": false, 00:13:57.549 "enable_placement_id": 0, 00:13:57.549 "enable_zerocopy_send_server": true, 00:13:57.549 "enable_zerocopy_send_client": false, 00:13:57.550 "zerocopy_threshold": 0, 00:13:57.550 "tls_version": 0, 00:13:57.550 "enable_ktls": false 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "sock_impl_set_options", 00:13:57.550 "params": { 00:13:57.550 "impl_name": "uring", 00:13:57.550 "recv_buf_size": 2097152, 00:13:57.550 "send_buf_size": 2097152, 00:13:57.550 "enable_recv_pipe": true, 00:13:57.550 "enable_quickack": false, 00:13:57.550 "enable_placement_id": 0, 00:13:57.550 "enable_zerocopy_send_server": false, 00:13:57.550 "enable_zerocopy_send_client": false, 00:13:57.550 "zerocopy_threshold": 0, 00:13:57.550 "tls_version": 0, 00:13:57.550 
"enable_ktls": false 00:13:57.550 } 00:13:57.550 } 00:13:57.550 ] 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "subsystem": "vmd", 00:13:57.550 "config": [] 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "subsystem": "accel", 00:13:57.550 "config": [ 00:13:57.550 { 00:13:57.550 "method": "accel_set_options", 00:13:57.550 "params": { 00:13:57.550 "small_cache_size": 128, 00:13:57.550 "large_cache_size": 16, 00:13:57.550 "task_count": 2048, 00:13:57.550 "sequence_count": 2048, 00:13:57.550 "buf_count": 2048 00:13:57.550 } 00:13:57.550 } 00:13:57.550 ] 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "subsystem": "bdev", 00:13:57.550 "config": [ 00:13:57.550 { 00:13:57.550 "method": "bdev_set_options", 00:13:57.550 "params": { 00:13:57.550 "bdev_io_pool_size": 65535, 00:13:57.550 "bdev_io_cache_size": 256, 00:13:57.550 "bdev_auto_examine": true, 00:13:57.550 "iobuf_small_cache_size": 128, 00:13:57.550 "iobuf_large_cache_size": 16 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_raid_set_options", 00:13:57.550 "params": { 00:13:57.550 "process_window_size_kb": 1024 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_iscsi_set_options", 00:13:57.550 "params": { 00:13:57.550 "timeout_sec": 30 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_nvme_set_options", 00:13:57.550 "params": { 00:13:57.550 "action_on_timeout": "none", 00:13:57.550 "timeout_us": 0, 00:13:57.550 "timeout_admin_us": 0, 00:13:57.550 "keep_alive_timeout_ms": 10000, 00:13:57.550 "arbitration_burst": 0, 00:13:57.550 "low_priority_weight": 0, 00:13:57.550 "medium_priority_weight": 0, 00:13:57.550 "high_priority_weight": 0, 00:13:57.550 "nvme_adminq_poll_period_us": 10000, 00:13:57.550 "nvme_ioq_poll_period_us": 0, 00:13:57.550 "io_queue_requests": 512, 00:13:57.550 "delay_cmd_submit": true, 00:13:57.550 "transport_retry_count": 4, 00:13:57.550 "bdev_retry_count": 3, 00:13:57.550 "transport_ack_timeout": 0, 00:13:57.550 "ctrlr_loss_timeout_sec": 0, 00:13:57.550 "reconnect_delay_sec": 0, 00:13:57.550 "fast_io_fail_timeout_sec": 0, 00:13:57.550 "disable_auto_failback": false, 00:13:57.550 "generate_uuids": false, 00:13:57.550 "transport_tos": 0, 00:13:57.550 "nvme_error_stat": false, 00:13:57.550 "rdma_srq_size": 0, 00:13:57.550 "io_path_stat": false, 00:13:57.550 "allow_accel_sequence": false, 00:13:57.550 "rdma_max_cq_size": 0, 00:13:57.550 "rdma_cm_event_timeout_ms": 0, 00:13:57.550 "dhchap_digests": [ 00:13:57.550 "sha256", 00:13:57.550 "sha384", 00:13:57.550 "sha512" 00:13:57.550 ], 00:13:57.550 "dhchap_dhgroups": [ 00:13:57.550 "null", 00:13:57.550 "ffdhe2048", 00:13:57.550 "ffdhe3072", 00:13:57.550 "ffdhe4096", 00:13:57.550 "ffdhe6144", 00:13:57.550 "ffdhe8192" 00:13:57.550 ] 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_nvme_attach_controller", 00:13:57.550 "params": { 00:13:57.550 "name": "TLSTEST", 00:13:57.550 "trtype": "TCP", 00:13:57.550 "adrfam": "IPv4", 00:13:57.550 "traddr": "10.0.0.2", 00:13:57.550 "trsvcid": "4420", 00:13:57.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.550 "prchk_reftag": false, 00:13:57.550 "prchk_guard": false, 00:13:57.550 "ctrlr_loss_timeout_sec": 0, 00:13:57.550 "reconnect_delay_sec": 0, 00:13:57.550 "fast_io_fail_timeout_sec": 0, 00:13:57.550 "psk": "/tmp/tmp.X4p5aVIgFR", 00:13:57.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.550 "hdgst": false, 00:13:57.550 "ddgst": false 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_nvme_set_hotplug", 00:13:57.550 
"params": { 00:13:57.550 "period_us": 100000, 00:13:57.550 "enable": false 00:13:57.550 } 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "method": "bdev_wait_for_examine" 00:13:57.550 } 00:13:57.550 ] 00:13:57.550 }, 00:13:57.550 { 00:13:57.550 "subsystem": "nbd", 00:13:57.550 "config": [] 00:13:57.550 } 00:13:57.550 ] 00:13:57.550 }' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73784 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73784 ']' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73784 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73784 00:13:57.550 killing process with pid 73784 00:13:57.550 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.550 00:13:57.550 Latency(us) 00:13:57.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.550 =================================================================================================================== 00:13:57.550 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73784' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73784 00:13:57.550 [2024-07-15 19:51:51.558988] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73784 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73729 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73729 ']' 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73729 00:13:57.550 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73729 00:13:57.809 killing process with pid 73729 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73729' 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73729 00:13:57.809 [2024-07-15 19:51:51.810084] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:57.809 19:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73729 00:13:58.068 19:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:58.068 19:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:58.068 "subsystems": [ 00:13:58.068 { 00:13:58.068 "subsystem": 
"keyring", 00:13:58.068 "config": [] 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "subsystem": "iobuf", 00:13:58.068 "config": [ 00:13:58.068 { 00:13:58.068 "method": "iobuf_set_options", 00:13:58.068 "params": { 00:13:58.068 "small_pool_count": 8192, 00:13:58.068 "large_pool_count": 1024, 00:13:58.068 "small_bufsize": 8192, 00:13:58.068 "large_bufsize": 135168 00:13:58.068 } 00:13:58.068 } 00:13:58.068 ] 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "subsystem": "sock", 00:13:58.068 "config": [ 00:13:58.068 { 00:13:58.068 "method": "sock_set_default_impl", 00:13:58.068 "params": { 00:13:58.068 "impl_name": "uring" 00:13:58.068 } 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "method": "sock_impl_set_options", 00:13:58.068 "params": { 00:13:58.068 "impl_name": "ssl", 00:13:58.068 "recv_buf_size": 4096, 00:13:58.068 "send_buf_size": 4096, 00:13:58.068 "enable_recv_pipe": true, 00:13:58.068 "enable_quickack": false, 00:13:58.068 "enable_placement_id": 0, 00:13:58.068 "enable_zerocopy_send_server": true, 00:13:58.068 "enable_zerocopy_send_client": false, 00:13:58.068 "zerocopy_threshold": 0, 00:13:58.068 "tls_version": 0, 00:13:58.068 "enable_ktls": false 00:13:58.068 } 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "method": "sock_impl_set_options", 00:13:58.068 "params": { 00:13:58.068 "impl_name": "posix", 00:13:58.068 "recv_buf_size": 2097152, 00:13:58.068 "send_buf_size": 2097152, 00:13:58.068 "enable_recv_pipe": true, 00:13:58.068 "enable_quickack": false, 00:13:58.068 "enable_placement_id": 0, 00:13:58.068 "enable_zerocopy_send_server": true, 00:13:58.068 "enable_zerocopy_send_client": false, 00:13:58.068 "zerocopy_threshold": 0, 00:13:58.068 "tls_version": 0, 00:13:58.068 "enable_ktls": false 00:13:58.068 } 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "method": "sock_impl_set_options", 00:13:58.068 "params": { 00:13:58.068 "impl_name": "uring", 00:13:58.068 "recv_buf_size": 2097152, 00:13:58.068 "send_buf_size": 2097152, 00:13:58.068 "enable_recv_pipe": true, 00:13:58.068 "enable_quickack": false, 00:13:58.068 "enable_placement_id": 0, 00:13:58.068 "enable_zerocopy_send_server": false, 00:13:58.068 "enable_zerocopy_send_client": false, 00:13:58.068 "zerocopy_threshold": 0, 00:13:58.068 "tls_version": 0, 00:13:58.068 "enable_ktls": false 00:13:58.068 } 00:13:58.068 } 00:13:58.068 ] 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "subsystem": "vmd", 00:13:58.068 "config": [] 00:13:58.068 }, 00:13:58.068 { 00:13:58.068 "subsystem": "accel", 00:13:58.069 "config": [ 00:13:58.069 { 00:13:58.069 "method": "accel_set_options", 00:13:58.069 "params": { 00:13:58.069 "small_cache_size": 128, 00:13:58.069 "large_cache_size": 16, 00:13:58.069 "task_count": 2048, 00:13:58.069 "sequence_count": 2048, 00:13:58.069 "buf_count": 2048 00:13:58.069 } 00:13:58.069 } 00:13:58.069 ] 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "subsystem": "bdev", 00:13:58.069 "config": [ 00:13:58.069 { 00:13:58.069 "method": "bdev_set_options", 00:13:58.069 "params": { 00:13:58.069 "bdev_io_pool_size": 65535, 00:13:58.069 "bdev_io_cache_size": 256, 00:13:58.069 "bdev_auto_examine": true, 00:13:58.069 "iobuf_small_cache_size": 128, 00:13:58.069 "iobuf_large_cache_size": 16 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "bdev_raid_set_options", 00:13:58.069 "params": { 00:13:58.069 "process_window_size_kb": 1024 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "bdev_iscsi_set_options", 00:13:58.069 "params": { 00:13:58.069 "timeout_sec": 30 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 
00:13:58.069 "method": "bdev_nvme_set_options", 00:13:58.069 "params": { 00:13:58.069 "action_on_timeout": "none", 00:13:58.069 "timeout_us": 0, 00:13:58.069 "timeout_admin_us": 0, 00:13:58.069 "keep_alive_timeout_ms": 10000, 00:13:58.069 "arbitration_burst": 0, 00:13:58.069 "low_priority_weight": 0, 00:13:58.069 "medium_priority_weight": 0, 00:13:58.069 "high_priority_weight": 0, 00:13:58.069 "nvme_adminq_poll_period_us": 10000, 00:13:58.069 "nvme_ioq_poll_period_us": 0, 00:13:58.069 "io_queue_requests": 0, 00:13:58.069 "delay_cmd_submit": true, 00:13:58.069 "transport_retry_count": 4, 00:13:58.069 "bdev_retry_count": 3, 00:13:58.069 "transport_ack_timeout": 0, 00:13:58.069 "ctrlr_loss_timeout_sec": 0, 00:13:58.069 "reconnect_delay_sec": 0, 00:13:58.069 "fast_io_fail_timeout_sec": 0, 00:13:58.069 "disable_auto_failback": false, 00:13:58.069 "generate_uuids": false, 00:13:58.069 "transport_tos": 0, 00:13:58.069 "nvme_error_stat": false, 00:13:58.069 "rdma_srq_size": 0, 00:13:58.069 "io_path_stat": false, 00:13:58.069 "allow_accel_sequence": false, 00:13:58.069 "rdma_max_cq_size": 0, 00:13:58.069 "rdma_cm_event_timeout_ms": 0, 00:13:58.069 "dhchap_digests": [ 00:13:58.069 "sha256", 00:13:58.069 "sha384", 00:13:58.069 "sha512" 00:13:58.069 ], 00:13:58.069 "dhchap_dhgroups": [ 00:13:58.069 "null", 00:13:58.069 "ffdhe2048", 00:13:58.069 "ffdhe3072", 00:13:58.069 "ffdhe4096", 00:13:58.069 "ffdhe6144", 00:13:58.069 "ffdhe8192" 00:13:58.069 ] 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "bdev_nvme_set_hotplug", 00:13:58.069 "params": { 00:13:58.069 "period_us": 100000, 00:13:58.069 "enable": false 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "bdev_malloc_create", 00:13:58.069 "params": { 00:13:58.069 "name": "malloc0", 00:13:58.069 "num_blocks": 8192, 00:13:58.069 "block_size": 4096, 00:13:58.069 "physical_block_size": 4096, 00:13:58.069 "uuid": "bdbdfc2f-0434-436a-bbac-6849b739f0fb", 00:13:58.069 "optimal_io_boundary": 0 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "bdev_wait_for_examine" 00:13:58.069 } 00:13:58.069 ] 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "subsystem": "nbd", 00:13:58.069 "config": [] 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "subsystem": "scheduler", 00:13:58.069 "config": [ 00:13:58.069 { 00:13:58.069 "method": "framework_set_scheduler", 00:13:58.069 "params": { 00:13:58.069 "name": "static" 00:13:58.069 } 00:13:58.069 } 00:13:58.069 ] 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "subsystem": "nvmf", 00:13:58.069 "config": [ 00:13:58.069 { 00:13:58.069 "method": "nvmf_set_config", 00:13:58.069 "params": { 00:13:58.069 "discovery_filter": "match_any", 00:13:58.069 "admin_cmd_passthru": { 00:13:58.069 "identify_ctrlr": false 00:13:58.069 } 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_set_max_subsystems", 00:13:58.069 "params": { 00:13:58.069 "max_subsystems": 1024 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_set_crdt", 00:13:58.069 "params": { 00:13:58.069 "crdt1": 0, 00:13:58.069 "crdt2": 0, 00:13:58.069 "crdt3": 0 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_create_transport", 00:13:58.069 "params": { 00:13:58.069 "trtype": "TCP", 00:13:58.069 "max_queue_depth": 128, 00:13:58.069 "max_io_qpairs_per_ctrlr": 127, 00:13:58.069 "in_capsule_data_size": 4096, 00:13:58.069 "max_io_size": 131072, 00:13:58.069 "io_unit_size": 131072, 00:13:58.069 "max_aq_depth": 128, 00:13:58.069 "num_shared_buffers": 511, 00:13:58.069 
"buf_cache_size": 4294967295, 00:13:58.069 "dif_insert_or_strip": false, 00:13:58.069 "zcopy": false, 00:13:58.069 "c2h_success": false, 00:13:58.069 "sock_priority": 0, 00:13:58.069 "abort_timeout_sec": 1, 00:13:58.069 "ack_timeout": 0, 00:13:58.069 "data_wr_pool_size": 0 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_create_subsystem", 00:13:58.069 "params": { 00:13:58.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.069 "allow_any_host": false, 00:13:58.069 "serial_number": "SPDK00000000000001", 00:13:58.069 "model_number": "SPDK bdev Controller", 00:13:58.069 "max_namespaces": 10, 00:13:58.069 "min_cntlid": 1, 00:13:58.069 "max_cntlid": 65519, 00:13:58.069 "ana_reporting": false 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_subsystem_add_host", 00:13:58.069 "params": { 00:13:58.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.069 "host": "nqn.2016-06.io.spdk:host1", 00:13:58.069 "psk": "/tmp/tmp.X4p5aVIgFR" 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_subsystem_add_ns", 00:13:58.069 "params": { 00:13:58.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.069 "namespace": { 00:13:58.069 "nsid": 1, 00:13:58.069 "bdev_name": "malloc0", 00:13:58.069 "nguid": "BDBDFC2F0434436ABBAC6849B739F0FB", 00:13:58.069 "uuid": "bdbdfc2f-0434-436a-bbac-6849b739f0fb", 00:13:58.069 "no_auto_visible": false 00:13:58.069 } 00:13:58.069 } 00:13:58.069 }, 00:13:58.069 { 00:13:58.069 "method": "nvmf_subsystem_add_listener", 00:13:58.069 "params": { 00:13:58.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.069 "listen_address": { 00:13:58.069 "trtype": "TCP", 00:13:58.069 "adrfam": "IPv4", 00:13:58.069 "traddr": "10.0.0.2", 00:13:58.069 "trsvcid": "4420" 00:13:58.069 }, 00:13:58.069 "secure_channel": true 00:13:58.069 } 00:13:58.069 } 00:13:58.069 ] 00:13:58.069 } 00:13:58.069 ] 00:13:58.069 }' 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73827 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73827 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73827 ']' 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.069 19:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.070 [2024-07-15 19:51:52.211004] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:58.070 [2024-07-15 19:51:52.211412] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.328 [2024-07-15 19:51:52.348247] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.328 [2024-07-15 19:51:52.497415] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.328 [2024-07-15 19:51:52.497494] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.328 [2024-07-15 19:51:52.497506] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.328 [2024-07-15 19:51:52.497516] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.328 [2024-07-15 19:51:52.497524] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.328 [2024-07-15 19:51:52.497626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.586 [2024-07-15 19:51:52.686508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:58.586 [2024-07-15 19:51:52.774344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.586 [2024-07-15 19:51:52.790283] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:58.586 [2024-07-15 19:51:52.806281] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:58.586 [2024-07-15 19:51:52.806574] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73860 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73860 /var/tmp/bdevperf.sock 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73860 ']' 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
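The large JSON blob echoed into this target is not hand-written: it is the output of "rpc.py save_config" captured from the previous target instance at target/tls.sh@196 and replayed into a fresh nvmf_tgt through -c /dev/fd/62, so the transport, subsystem, TLS listener and PSK-protected host all come back without any further RPC calls. The same pattern written out with an ordinary file instead of a file descriptor (the file name here is illustrative):

scripts/rpc.py save_config > /tmp/tgt_config.json      # dump the live target configuration as JSON
# A new target started with -c restores the whole configuration at startup:
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt_config.json

The bdevperf side is restored the same way further down: its own save_config output (captured at target/tls.sh@197 over /var/tmp/bdevperf.sock) is fed back through -c /dev/fd/63.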
00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:59.176 19:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:59.176 "subsystems": [ 00:13:59.176 { 00:13:59.176 "subsystem": "keyring", 00:13:59.176 "config": [] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "iobuf", 00:13:59.176 "config": [ 00:13:59.176 { 00:13:59.176 "method": "iobuf_set_options", 00:13:59.176 "params": { 00:13:59.176 "small_pool_count": 8192, 00:13:59.176 "large_pool_count": 1024, 00:13:59.176 "small_bufsize": 8192, 00:13:59.176 "large_bufsize": 135168 00:13:59.176 } 00:13:59.176 } 00:13:59.176 ] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "sock", 00:13:59.176 "config": [ 00:13:59.176 { 00:13:59.176 "method": "sock_set_default_impl", 00:13:59.176 "params": { 00:13:59.176 "impl_name": "uring" 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "sock_impl_set_options", 00:13:59.176 "params": { 00:13:59.176 "impl_name": "ssl", 00:13:59.176 "recv_buf_size": 4096, 00:13:59.176 "send_buf_size": 4096, 00:13:59.176 "enable_recv_pipe": true, 00:13:59.176 "enable_quickack": false, 00:13:59.176 "enable_placement_id": 0, 00:13:59.176 "enable_zerocopy_send_server": true, 00:13:59.176 "enable_zerocopy_send_client": false, 00:13:59.176 "zerocopy_threshold": 0, 00:13:59.176 "tls_version": 0, 00:13:59.176 "enable_ktls": false 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "sock_impl_set_options", 00:13:59.176 "params": { 00:13:59.176 "impl_name": "posix", 00:13:59.176 "recv_buf_size": 2097152, 00:13:59.176 "send_buf_size": 2097152, 00:13:59.176 "enable_recv_pipe": true, 00:13:59.176 "enable_quickack": false, 00:13:59.176 "enable_placement_id": 0, 00:13:59.176 "enable_zerocopy_send_server": true, 00:13:59.176 "enable_zerocopy_send_client": false, 00:13:59.176 "zerocopy_threshold": 0, 00:13:59.176 "tls_version": 0, 00:13:59.176 "enable_ktls": false 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "sock_impl_set_options", 00:13:59.176 "params": { 00:13:59.176 "impl_name": "uring", 00:13:59.176 "recv_buf_size": 2097152, 00:13:59.176 "send_buf_size": 2097152, 00:13:59.176 "enable_recv_pipe": true, 00:13:59.176 "enable_quickack": false, 00:13:59.176 "enable_placement_id": 0, 00:13:59.176 "enable_zerocopy_send_server": false, 00:13:59.176 "enable_zerocopy_send_client": false, 00:13:59.176 "zerocopy_threshold": 0, 00:13:59.176 "tls_version": 0, 00:13:59.176 "enable_ktls": false 00:13:59.176 } 00:13:59.176 } 00:13:59.176 ] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "vmd", 00:13:59.176 "config": [] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "accel", 00:13:59.176 "config": [ 00:13:59.176 { 00:13:59.176 "method": "accel_set_options", 00:13:59.176 "params": { 00:13:59.176 "small_cache_size": 128, 00:13:59.176 "large_cache_size": 16, 00:13:59.176 "task_count": 2048, 00:13:59.176 "sequence_count": 2048, 00:13:59.176 "buf_count": 2048 00:13:59.176 } 00:13:59.176 } 00:13:59.176 ] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "bdev", 00:13:59.176 "config": [ 00:13:59.176 { 00:13:59.176 "method": "bdev_set_options", 00:13:59.176 "params": { 00:13:59.176 "bdev_io_pool_size": 65535, 00:13:59.176 
"bdev_io_cache_size": 256, 00:13:59.176 "bdev_auto_examine": true, 00:13:59.176 "iobuf_small_cache_size": 128, 00:13:59.176 "iobuf_large_cache_size": 16 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_raid_set_options", 00:13:59.176 "params": { 00:13:59.176 "process_window_size_kb": 1024 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_iscsi_set_options", 00:13:59.176 "params": { 00:13:59.176 "timeout_sec": 30 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_nvme_set_options", 00:13:59.176 "params": { 00:13:59.176 "action_on_timeout": "none", 00:13:59.176 "timeout_us": 0, 00:13:59.176 "timeout_admin_us": 0, 00:13:59.176 "keep_alive_timeout_ms": 10000, 00:13:59.176 "arbitration_burst": 0, 00:13:59.176 "low_priority_weight": 0, 00:13:59.176 "medium_priority_weight": 0, 00:13:59.176 "high_priority_weight": 0, 00:13:59.176 "nvme_adminq_poll_period_us": 10000, 00:13:59.176 "nvme_ioq_poll_period_us": 0, 00:13:59.176 "io_queue_requests": 512, 00:13:59.176 "delay_cmd_submit": true, 00:13:59.176 "transport_retry_count": 4, 00:13:59.176 "bdev_retry_count": 3, 00:13:59.176 "transport_ack_timeout": 0, 00:13:59.176 "ctrlr_loss_timeout_sec": 0, 00:13:59.176 "reconnect_delay_sec": 0, 00:13:59.176 "fast_io_fail_timeout_sec": 0, 00:13:59.176 "disable_auto_failback": false, 00:13:59.176 "generate_uuids": false, 00:13:59.176 "transport_tos": 0, 00:13:59.176 "nvme_error_stat": false, 00:13:59.176 "rdma_srq_size": 0, 00:13:59.176 "io_path_stat": false, 00:13:59.176 "allow_accel_sequence": false, 00:13:59.176 "rdma_max_cq_size": 0, 00:13:59.176 "rdma_cm_event_timeout_ms": 0, 00:13:59.176 "dhchap_digests": [ 00:13:59.176 "sha256", 00:13:59.176 "sha384", 00:13:59.176 "sha512" 00:13:59.176 ], 00:13:59.176 "dhchap_dhgroups": [ 00:13:59.176 "null", 00:13:59.176 "ffdhe2048", 00:13:59.176 "ffdhe3072", 00:13:59.176 "ffdhe4096", 00:13:59.176 "ffdhe6144", 00:13:59.176 "ffdhe8192" 00:13:59.176 ] 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_nvme_attach_controller", 00:13:59.176 "params": { 00:13:59.176 "name": "TLSTEST", 00:13:59.176 "trtype": "TCP", 00:13:59.176 "adrfam": "IPv4", 00:13:59.176 "traddr": "10.0.0.2", 00:13:59.176 "trsvcid": "4420", 00:13:59.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.176 "prchk_reftag": false, 00:13:59.176 "prchk_guard": false, 00:13:59.176 "ctrlr_loss_timeout_sec": 0, 00:13:59.176 "reconnect_delay_sec": 0, 00:13:59.176 "fast_io_fail_timeout_sec": 0, 00:13:59.176 "psk": "/tmp/tmp.X4p5aVIgFR", 00:13:59.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.176 "hdgst": false, 00:13:59.176 "ddgst": false 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_nvme_set_hotplug", 00:13:59.176 "params": { 00:13:59.176 "period_us": 100000, 00:13:59.176 "enable": false 00:13:59.176 } 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "method": "bdev_wait_for_examine" 00:13:59.176 } 00:13:59.176 ] 00:13:59.176 }, 00:13:59.176 { 00:13:59.176 "subsystem": "nbd", 00:13:59.176 "config": [] 00:13:59.176 } 00:13:59.176 ] 00:13:59.176 }' 00:13:59.176 [2024-07-15 19:51:53.212742] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:59.177 [2024-07-15 19:51:53.212841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73860 ] 00:13:59.177 [2024-07-15 19:51:53.351387] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.434 [2024-07-15 19:51:53.482954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.434 [2024-07-15 19:51:53.618165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:59.434 [2024-07-15 19:51:53.657917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:59.434 [2024-07-15 19:51:53.658370] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:59.999 19:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.999 19:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:59.999 19:51:54 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:00.258 Running I/O for 10 seconds... 00:14:10.228 00:14:10.228 Latency(us) 00:14:10.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.228 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:10.228 Verification LBA range: start 0x0 length 0x2000 00:14:10.228 TLSTESTn1 : 10.02 3888.39 15.19 0.00 0.00 32855.06 7000.44 31933.91 00:14:10.228 =================================================================================================================== 00:14:10.228 Total : 3888.39 15.19 0.00 0.00 32855.06 7000.44 31933.91 00:14:10.228 0 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73860 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73860 ']' 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73860 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73860 00:14:10.228 killing process with pid 73860 00:14:10.228 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.228 00:14:10.228 Latency(us) 00:14:10.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.228 =================================================================================================================== 00:14:10.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73860' 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73860 00:14:10.228 [2024-07-15 19:52:04.340092] app.c:1029:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.228 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73860 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73827 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73827 ']' 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73827 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73827 00:14:10.520 killing process with pid 73827 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73827' 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73827 00:14:10.520 [2024-07-15 19:52:04.690638] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:10.520 19:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73827 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74000 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74000 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74000 ']' 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.786 19:52:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.045 [2024-07-15 19:52:05.105378] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:11.045 [2024-07-15 19:52:05.105516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.045 [2024-07-15 19:52:05.260199] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.304 [2024-07-15 19:52:05.388514] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:11.304 [2024-07-15 19:52:05.388582] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.304 [2024-07-15 19:52:05.388605] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.304 [2024-07-15 19:52:05.388621] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.304 [2024-07-15 19:52:05.388635] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.304 [2024-07-15 19:52:05.388677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.304 [2024-07-15 19:52:05.446616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.X4p5aVIgFR 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.X4p5aVIgFR 00:14:11.871 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:12.129 [2024-07-15 19:52:06.320975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.129 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:12.387 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:12.645 [2024-07-15 19:52:06.845109] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:12.645 [2024-07-15 19:52:06.845378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.645 19:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:12.904 malloc0 00:14:12.904 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:13.162 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X4p5aVIgFR 00:14:13.421 [2024-07-15 19:52:07.568463] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74053 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
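In the run that follows, the PSK is no longer handed to bdev_nvme_attach_controller as a bare file path; it is first registered with keyring_file_add_key and then referenced by name, which is the direction the deprecation warnings earlier in this log point to. Condensed from the commands below (paths shortened):

BP_SOCK=/var/tmp/bdevperf.sock
scripts/rpc.py -s "$BP_SOCK" keyring_file_add_key key0 /tmp/tmp.X4p5aVIgFR     # register the PSK file as key "key0"
scripts/rpc.py -s "$BP_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s "$BP_SOCK" perform_tests                 # short verify run (-t 1 on the bdevperf command line)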
00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74053 /var/tmp/bdevperf.sock 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74053 ']' 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.421 19:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.421 [2024-07-15 19:52:07.648142] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:13.421 [2024-07-15 19:52:07.648515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74053 ] 00:14:13.680 [2024-07-15 19:52:07.789850] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.939 [2024-07-15 19:52:07.948183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.939 [2024-07-15 19:52:08.022492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.506 19:52:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.506 19:52:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:14.506 19:52:08 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X4p5aVIgFR 00:14:14.764 19:52:08 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:15.023 [2024-07-15 19:52:09.058923] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.023 nvme0n1 00:14:15.023 19:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.318 Running I/O for 1 seconds... 
00:14:16.253 00:14:16.253 Latency(us) 00:14:16.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.254 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.254 Verification LBA range: start 0x0 length 0x2000 00:14:16.254 nvme0n1 : 1.03 3549.64 13.87 0.00 0.00 35488.51 7804.74 20971.52 00:14:16.254 =================================================================================================================== 00:14:16.254 Total : 3549.64 13.87 0.00 0.00 35488.51 7804.74 20971.52 00:14:16.254 0 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74053 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74053 ']' 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74053 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74053 00:14:16.254 killing process with pid 74053 00:14:16.254 Received shutdown signal, test time was about 1.000000 seconds 00:14:16.254 00:14:16.254 Latency(us) 00:14:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.254 =================================================================================================================== 00:14:16.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74053' 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74053 00:14:16.254 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74053 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74000 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74000 ']' 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74000 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74000 00:14:16.512 killing process with pid 74000 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74000' 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74000 00:14:16.512 [2024-07-15 19:52:10.696151] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:16.512 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74000 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74110 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74110 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74110 ']' 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.771 19:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.029 [2024-07-15 19:52:11.025104] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:17.029 [2024-07-15 19:52:11.025253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.029 [2024-07-15 19:52:11.166654] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.288 [2024-07-15 19:52:11.284420] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.288 [2024-07-15 19:52:11.284482] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.288 [2024-07-15 19:52:11.284495] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.288 [2024-07-15 19:52:11.284503] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.288 [2024-07-15 19:52:11.284510] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
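This second target (pid 74110) is started with -e 0xFFFF, so all tracepoint groups are enabled, and the notices above spell out how to pull the data. A short sketch of the two options they describe, assuming the spdk_trace tool from this build is on PATH:

  spdk_trace -s nvmf -i 0            # snapshot trace events from the running target, exactly as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/     # or copy the trace shared-memory file for offline analysis/debug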
00:14:17.288 [2024-07-15 19:52:11.284537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.288 [2024-07-15 19:52:11.338832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.856 19:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.856 19:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:17.856 19:52:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.856 19:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.856 19:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.856 19:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.856 19:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:17.856 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.856 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.856 [2024-07-15 19:52:12.038985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.856 malloc0 00:14:17.856 [2024-07-15 19:52:12.070219] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.856 [2024-07-15 19:52:12.070631] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=74142 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 74142 /var/tmp/bdevperf.sock 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74142 ']' 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.114 19:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.114 [2024-07-15 19:52:12.148049] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:14:18.114 [2024-07-15 19:52:12.148322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74142 ] 00:14:18.114 [2024-07-15 19:52:12.285524] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.372 [2024-07-15 19:52:12.415250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.372 [2024-07-15 19:52:12.471577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.076 19:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.076 19:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:19.076 19:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X4p5aVIgFR 00:14:19.333 19:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:19.592 [2024-07-15 19:52:13.787221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.850 nvme0n1 00:14:19.850 19:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.850 Running I/O for 1 seconds... 00:14:21.223 00:14:21.223 Latency(us) 00:14:21.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.223 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:21.223 Verification LBA range: start 0x0 length 0x2000 00:14:21.223 nvme0n1 : 1.02 3663.41 14.31 0.00 0.00 34526.90 9175.04 29550.78 00:14:21.223 =================================================================================================================== 00:14:21.223 Total : 3663.41 14.31 0.00 0.00 34526.90 9175.04 29550.78 00:14:21.223 0 00:14:21.223 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:21.223 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.223 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.223 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:21.223 "subsystems": [ 00:14:21.223 { 00:14:21.223 "subsystem": "keyring", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "keyring_file_add_key", 00:14:21.223 "params": { 00:14:21.223 "name": "key0", 00:14:21.223 "path": "/tmp/tmp.X4p5aVIgFR" 00:14:21.223 } 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "iobuf", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "iobuf_set_options", 00:14:21.223 "params": { 00:14:21.223 "small_pool_count": 8192, 00:14:21.223 "large_pool_count": 1024, 00:14:21.223 "small_bufsize": 8192, 00:14:21.223 "large_bufsize": 135168 00:14:21.223 } 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "sock", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "sock_set_default_impl", 00:14:21.223 "params": { 00:14:21.223 "impl_name": "uring" 
00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "sock_impl_set_options", 00:14:21.223 "params": { 00:14:21.223 "impl_name": "ssl", 00:14:21.223 "recv_buf_size": 4096, 00:14:21.223 "send_buf_size": 4096, 00:14:21.223 "enable_recv_pipe": true, 00:14:21.223 "enable_quickack": false, 00:14:21.223 "enable_placement_id": 0, 00:14:21.223 "enable_zerocopy_send_server": true, 00:14:21.223 "enable_zerocopy_send_client": false, 00:14:21.223 "zerocopy_threshold": 0, 00:14:21.223 "tls_version": 0, 00:14:21.223 "enable_ktls": false 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "sock_impl_set_options", 00:14:21.223 "params": { 00:14:21.223 "impl_name": "posix", 00:14:21.223 "recv_buf_size": 2097152, 00:14:21.223 "send_buf_size": 2097152, 00:14:21.223 "enable_recv_pipe": true, 00:14:21.223 "enable_quickack": false, 00:14:21.223 "enable_placement_id": 0, 00:14:21.223 "enable_zerocopy_send_server": true, 00:14:21.223 "enable_zerocopy_send_client": false, 00:14:21.223 "zerocopy_threshold": 0, 00:14:21.223 "tls_version": 0, 00:14:21.223 "enable_ktls": false 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "sock_impl_set_options", 00:14:21.223 "params": { 00:14:21.223 "impl_name": "uring", 00:14:21.223 "recv_buf_size": 2097152, 00:14:21.223 "send_buf_size": 2097152, 00:14:21.223 "enable_recv_pipe": true, 00:14:21.223 "enable_quickack": false, 00:14:21.223 "enable_placement_id": 0, 00:14:21.223 "enable_zerocopy_send_server": false, 00:14:21.223 "enable_zerocopy_send_client": false, 00:14:21.223 "zerocopy_threshold": 0, 00:14:21.223 "tls_version": 0, 00:14:21.223 "enable_ktls": false 00:14:21.223 } 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "vmd", 00:14:21.223 "config": [] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "accel", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "accel_set_options", 00:14:21.223 "params": { 00:14:21.223 "small_cache_size": 128, 00:14:21.223 "large_cache_size": 16, 00:14:21.223 "task_count": 2048, 00:14:21.223 "sequence_count": 2048, 00:14:21.223 "buf_count": 2048 00:14:21.223 } 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "bdev", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "bdev_set_options", 00:14:21.223 "params": { 00:14:21.223 "bdev_io_pool_size": 65535, 00:14:21.223 "bdev_io_cache_size": 256, 00:14:21.223 "bdev_auto_examine": true, 00:14:21.223 "iobuf_small_cache_size": 128, 00:14:21.223 "iobuf_large_cache_size": 16 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_raid_set_options", 00:14:21.223 "params": { 00:14:21.223 "process_window_size_kb": 1024 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_iscsi_set_options", 00:14:21.223 "params": { 00:14:21.223 "timeout_sec": 30 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_nvme_set_options", 00:14:21.223 "params": { 00:14:21.223 "action_on_timeout": "none", 00:14:21.223 "timeout_us": 0, 00:14:21.223 "timeout_admin_us": 0, 00:14:21.223 "keep_alive_timeout_ms": 10000, 00:14:21.223 "arbitration_burst": 0, 00:14:21.223 "low_priority_weight": 0, 00:14:21.223 "medium_priority_weight": 0, 00:14:21.223 "high_priority_weight": 0, 00:14:21.223 "nvme_adminq_poll_period_us": 10000, 00:14:21.223 "nvme_ioq_poll_period_us": 0, 00:14:21.223 "io_queue_requests": 0, 00:14:21.223 "delay_cmd_submit": true, 00:14:21.223 "transport_retry_count": 4, 00:14:21.223 "bdev_retry_count": 3, 
00:14:21.223 "transport_ack_timeout": 0, 00:14:21.223 "ctrlr_loss_timeout_sec": 0, 00:14:21.223 "reconnect_delay_sec": 0, 00:14:21.223 "fast_io_fail_timeout_sec": 0, 00:14:21.223 "disable_auto_failback": false, 00:14:21.223 "generate_uuids": false, 00:14:21.223 "transport_tos": 0, 00:14:21.223 "nvme_error_stat": false, 00:14:21.223 "rdma_srq_size": 0, 00:14:21.223 "io_path_stat": false, 00:14:21.223 "allow_accel_sequence": false, 00:14:21.223 "rdma_max_cq_size": 0, 00:14:21.223 "rdma_cm_event_timeout_ms": 0, 00:14:21.223 "dhchap_digests": [ 00:14:21.223 "sha256", 00:14:21.223 "sha384", 00:14:21.223 "sha512" 00:14:21.223 ], 00:14:21.223 "dhchap_dhgroups": [ 00:14:21.223 "null", 00:14:21.223 "ffdhe2048", 00:14:21.223 "ffdhe3072", 00:14:21.223 "ffdhe4096", 00:14:21.223 "ffdhe6144", 00:14:21.223 "ffdhe8192" 00:14:21.223 ] 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_nvme_set_hotplug", 00:14:21.223 "params": { 00:14:21.223 "period_us": 100000, 00:14:21.223 "enable": false 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_malloc_create", 00:14:21.223 "params": { 00:14:21.223 "name": "malloc0", 00:14:21.223 "num_blocks": 8192, 00:14:21.223 "block_size": 4096, 00:14:21.223 "physical_block_size": 4096, 00:14:21.223 "uuid": "8038c602-b8c0-4a78-89c3-b9dc6552d89c", 00:14:21.223 "optimal_io_boundary": 0 00:14:21.223 } 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "method": "bdev_wait_for_examine" 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "nbd", 00:14:21.223 "config": [] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "scheduler", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "framework_set_scheduler", 00:14:21.223 "params": { 00:14:21.223 "name": "static" 00:14:21.223 } 00:14:21.223 } 00:14:21.223 ] 00:14:21.223 }, 00:14:21.223 { 00:14:21.223 "subsystem": "nvmf", 00:14:21.223 "config": [ 00:14:21.223 { 00:14:21.223 "method": "nvmf_set_config", 00:14:21.223 "params": { 00:14:21.224 "discovery_filter": "match_any", 00:14:21.224 "admin_cmd_passthru": { 00:14:21.224 "identify_ctrlr": false 00:14:21.224 } 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_set_max_subsystems", 00:14:21.224 "params": { 00:14:21.224 "max_subsystems": 1024 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_set_crdt", 00:14:21.224 "params": { 00:14:21.224 "crdt1": 0, 00:14:21.224 "crdt2": 0, 00:14:21.224 "crdt3": 0 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_create_transport", 00:14:21.224 "params": { 00:14:21.224 "trtype": "TCP", 00:14:21.224 "max_queue_depth": 128, 00:14:21.224 "max_io_qpairs_per_ctrlr": 127, 00:14:21.224 "in_capsule_data_size": 4096, 00:14:21.224 "max_io_size": 131072, 00:14:21.224 "io_unit_size": 131072, 00:14:21.224 "max_aq_depth": 128, 00:14:21.224 "num_shared_buffers": 511, 00:14:21.224 "buf_cache_size": 4294967295, 00:14:21.224 "dif_insert_or_strip": false, 00:14:21.224 "zcopy": false, 00:14:21.224 "c2h_success": false, 00:14:21.224 "sock_priority": 0, 00:14:21.224 "abort_timeout_sec": 1, 00:14:21.224 "ack_timeout": 0, 00:14:21.224 "data_wr_pool_size": 0 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_create_subsystem", 00:14:21.224 "params": { 00:14:21.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.224 "allow_any_host": false, 00:14:21.224 "serial_number": "00000000000000000000", 00:14:21.224 "model_number": "SPDK bdev Controller", 00:14:21.224 "max_namespaces": 32, 
00:14:21.224 "min_cntlid": 1, 00:14:21.224 "max_cntlid": 65519, 00:14:21.224 "ana_reporting": false 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_subsystem_add_host", 00:14:21.224 "params": { 00:14:21.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.224 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.224 "psk": "key0" 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_subsystem_add_ns", 00:14:21.224 "params": { 00:14:21.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.224 "namespace": { 00:14:21.224 "nsid": 1, 00:14:21.224 "bdev_name": "malloc0", 00:14:21.224 "nguid": "8038C602B8C04A7889C3B9DC6552D89C", 00:14:21.224 "uuid": "8038c602-b8c0-4a78-89c3-b9dc6552d89c", 00:14:21.224 "no_auto_visible": false 00:14:21.224 } 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "nvmf_subsystem_add_listener", 00:14:21.224 "params": { 00:14:21.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.224 "listen_address": { 00:14:21.224 "trtype": "TCP", 00:14:21.224 "adrfam": "IPv4", 00:14:21.224 "traddr": "10.0.0.2", 00:14:21.224 "trsvcid": "4420" 00:14:21.224 }, 00:14:21.224 "secure_channel": false, 00:14:21.224 "sock_impl": "ssl" 00:14:21.224 } 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 }' 00:14:21.224 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:21.224 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:21.224 "subsystems": [ 00:14:21.224 { 00:14:21.224 "subsystem": "keyring", 00:14:21.224 "config": [ 00:14:21.224 { 00:14:21.224 "method": "keyring_file_add_key", 00:14:21.224 "params": { 00:14:21.224 "name": "key0", 00:14:21.224 "path": "/tmp/tmp.X4p5aVIgFR" 00:14:21.224 } 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "subsystem": "iobuf", 00:14:21.224 "config": [ 00:14:21.224 { 00:14:21.224 "method": "iobuf_set_options", 00:14:21.224 "params": { 00:14:21.224 "small_pool_count": 8192, 00:14:21.224 "large_pool_count": 1024, 00:14:21.224 "small_bufsize": 8192, 00:14:21.224 "large_bufsize": 135168 00:14:21.224 } 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "subsystem": "sock", 00:14:21.224 "config": [ 00:14:21.224 { 00:14:21.224 "method": "sock_set_default_impl", 00:14:21.224 "params": { 00:14:21.224 "impl_name": "uring" 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "sock_impl_set_options", 00:14:21.224 "params": { 00:14:21.224 "impl_name": "ssl", 00:14:21.224 "recv_buf_size": 4096, 00:14:21.224 "send_buf_size": 4096, 00:14:21.224 "enable_recv_pipe": true, 00:14:21.224 "enable_quickack": false, 00:14:21.224 "enable_placement_id": 0, 00:14:21.224 "enable_zerocopy_send_server": true, 00:14:21.224 "enable_zerocopy_send_client": false, 00:14:21.224 "zerocopy_threshold": 0, 00:14:21.224 "tls_version": 0, 00:14:21.224 "enable_ktls": false 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "sock_impl_set_options", 00:14:21.224 "params": { 00:14:21.224 "impl_name": "posix", 00:14:21.224 "recv_buf_size": 2097152, 00:14:21.224 "send_buf_size": 2097152, 00:14:21.224 "enable_recv_pipe": true, 00:14:21.224 "enable_quickack": false, 00:14:21.224 "enable_placement_id": 0, 00:14:21.224 "enable_zerocopy_send_server": true, 00:14:21.224 "enable_zerocopy_send_client": false, 00:14:21.224 "zerocopy_threshold": 0, 00:14:21.224 "tls_version": 0, 00:14:21.224 "enable_ktls": false 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 
00:14:21.224 "method": "sock_impl_set_options", 00:14:21.224 "params": { 00:14:21.224 "impl_name": "uring", 00:14:21.224 "recv_buf_size": 2097152, 00:14:21.224 "send_buf_size": 2097152, 00:14:21.224 "enable_recv_pipe": true, 00:14:21.224 "enable_quickack": false, 00:14:21.224 "enable_placement_id": 0, 00:14:21.224 "enable_zerocopy_send_server": false, 00:14:21.224 "enable_zerocopy_send_client": false, 00:14:21.224 "zerocopy_threshold": 0, 00:14:21.224 "tls_version": 0, 00:14:21.224 "enable_ktls": false 00:14:21.224 } 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "subsystem": "vmd", 00:14:21.224 "config": [] 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "subsystem": "accel", 00:14:21.224 "config": [ 00:14:21.224 { 00:14:21.224 "method": "accel_set_options", 00:14:21.224 "params": { 00:14:21.224 "small_cache_size": 128, 00:14:21.224 "large_cache_size": 16, 00:14:21.224 "task_count": 2048, 00:14:21.224 "sequence_count": 2048, 00:14:21.224 "buf_count": 2048 00:14:21.224 } 00:14:21.224 } 00:14:21.224 ] 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "subsystem": "bdev", 00:14:21.224 "config": [ 00:14:21.224 { 00:14:21.224 "method": "bdev_set_options", 00:14:21.224 "params": { 00:14:21.224 "bdev_io_pool_size": 65535, 00:14:21.224 "bdev_io_cache_size": 256, 00:14:21.224 "bdev_auto_examine": true, 00:14:21.224 "iobuf_small_cache_size": 128, 00:14:21.224 "iobuf_large_cache_size": 16 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "bdev_raid_set_options", 00:14:21.224 "params": { 00:14:21.224 "process_window_size_kb": 1024 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "bdev_iscsi_set_options", 00:14:21.224 "params": { 00:14:21.224 "timeout_sec": 30 00:14:21.224 } 00:14:21.224 }, 00:14:21.224 { 00:14:21.224 "method": "bdev_nvme_set_options", 00:14:21.224 "params": { 00:14:21.224 "action_on_timeout": "none", 00:14:21.224 "timeout_us": 0, 00:14:21.224 "timeout_admin_us": 0, 00:14:21.224 "keep_alive_timeout_ms": 10000, 00:14:21.224 "arbitration_burst": 0, 00:14:21.224 "low_priority_weight": 0, 00:14:21.224 "medium_priority_weight": 0, 00:14:21.224 "high_priority_weight": 0, 00:14:21.224 "nvme_adminq_poll_period_us": 10000, 00:14:21.224 "nvme_ioq_poll_period_us": 0, 00:14:21.224 "io_queue_requests": 512, 00:14:21.224 "delay_cmd_submit": true, 00:14:21.224 "transport_retry_count": 4, 00:14:21.224 "bdev_retry_count": 3, 00:14:21.224 "transport_ack_timeout": 0, 00:14:21.224 "ctrlr_loss_timeout_sec": 0, 00:14:21.224 "reconnect_delay_sec": 0, 00:14:21.224 "fast_io_fail_timeout_sec": 0, 00:14:21.224 "disable_auto_failback": false, 00:14:21.224 "generate_uuids": false, 00:14:21.224 "transport_tos": 0, 00:14:21.224 "nvme_error_stat": false, 00:14:21.224 "rdma_srq_size": 0, 00:14:21.224 "io_path_stat": false, 00:14:21.224 "allow_accel_sequence": false, 00:14:21.224 "rdma_max_cq_size": 0, 00:14:21.224 "rdma_cm_event_timeout_ms": 0, 00:14:21.224 "dhchap_digests": [ 00:14:21.224 "sha256", 00:14:21.224 "sha384", 00:14:21.224 "sha512" 00:14:21.224 ], 00:14:21.224 "dhchap_dhgroups": [ 00:14:21.224 "null", 00:14:21.224 "ffdhe2048", 00:14:21.224 "ffdhe3072", 00:14:21.224 "ffdhe4096", 00:14:21.224 "ffdhe6144", 00:14:21.224 "ffdhe8192" 00:14:21.225 ] 00:14:21.225 } 00:14:21.225 }, 00:14:21.225 { 00:14:21.225 "method": "bdev_nvme_attach_controller", 00:14:21.225 "params": { 00:14:21.225 "name": "nvme0", 00:14:21.225 "trtype": "TCP", 00:14:21.225 "adrfam": "IPv4", 00:14:21.225 "traddr": "10.0.0.2", 00:14:21.225 "trsvcid": "4420", 00:14:21.225 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:21.225 "prchk_reftag": false, 00:14:21.225 "prchk_guard": false, 00:14:21.225 "ctrlr_loss_timeout_sec": 0, 00:14:21.225 "reconnect_delay_sec": 0, 00:14:21.225 "fast_io_fail_timeout_sec": 0, 00:14:21.225 "psk": "key0", 00:14:21.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.225 "hdgst": false, 00:14:21.225 "ddgst": false 00:14:21.225 } 00:14:21.225 }, 00:14:21.225 { 00:14:21.225 "method": "bdev_nvme_set_hotplug", 00:14:21.225 "params": { 00:14:21.225 "period_us": 100000, 00:14:21.225 "enable": false 00:14:21.225 } 00:14:21.225 }, 00:14:21.225 { 00:14:21.225 "method": "bdev_enable_histogram", 00:14:21.225 "params": { 00:14:21.225 "name": "nvme0n1", 00:14:21.225 "enable": true 00:14:21.225 } 00:14:21.225 }, 00:14:21.225 { 00:14:21.225 "method": "bdev_wait_for_examine" 00:14:21.225 } 00:14:21.225 ] 00:14:21.225 }, 00:14:21.225 { 00:14:21.225 "subsystem": "nbd", 00:14:21.225 "config": [] 00:14:21.225 } 00:14:21.225 ] 00:14:21.225 }' 00:14:21.225 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 74142 00:14:21.225 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74142 ']' 00:14:21.225 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74142 00:14:21.225 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74142 00:14:21.484 killing process with pid 74142 00:14:21.484 Received shutdown signal, test time was about 1.000000 seconds 00:14:21.484 00:14:21.484 Latency(us) 00:14:21.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.484 =================================================================================================================== 00:14:21.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74142' 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74142 00:14:21.484 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74142 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 74110 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74110 ']' 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74110 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74110 00:14:21.742 killing process with pid 74110 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:21.742 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:21.743 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74110' 00:14:21.743 19:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74110 00:14:21.743 19:52:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 74110 00:14:22.002 19:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:22.002 19:52:16 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:22.002 "subsystems": [ 00:14:22.002 { 00:14:22.002 "subsystem": "keyring", 00:14:22.002 "config": [ 00:14:22.002 { 00:14:22.002 "method": "keyring_file_add_key", 00:14:22.002 "params": { 00:14:22.002 "name": "key0", 00:14:22.003 "path": "/tmp/tmp.X4p5aVIgFR" 00:14:22.003 } 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "iobuf", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "iobuf_set_options", 00:14:22.003 "params": { 00:14:22.003 "small_pool_count": 8192, 00:14:22.003 "large_pool_count": 1024, 00:14:22.003 "small_bufsize": 8192, 00:14:22.003 "large_bufsize": 135168 00:14:22.003 } 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "sock", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "sock_set_default_impl", 00:14:22.003 "params": { 00:14:22.003 "impl_name": "uring" 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "sock_impl_set_options", 00:14:22.003 "params": { 00:14:22.003 "impl_name": "ssl", 00:14:22.003 "recv_buf_size": 4096, 00:14:22.003 "send_buf_size": 4096, 00:14:22.003 "enable_recv_pipe": true, 00:14:22.003 "enable_quickack": false, 00:14:22.003 "enable_placement_id": 0, 00:14:22.003 "enable_zerocopy_send_server": true, 00:14:22.003 "enable_zerocopy_send_client": false, 00:14:22.003 "zerocopy_threshold": 0, 00:14:22.003 "tls_version": 0, 00:14:22.003 "enable_ktls": false 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "sock_impl_set_options", 00:14:22.003 "params": { 00:14:22.003 "impl_name": "posix", 00:14:22.003 "recv_buf_size": 2097152, 00:14:22.003 "send_buf_size": 2097152, 00:14:22.003 "enable_recv_pipe": true, 00:14:22.003 "enable_quickack": false, 00:14:22.003 "enable_placement_id": 0, 00:14:22.003 "enable_zerocopy_send_server": true, 00:14:22.003 "enable_zerocopy_send_client": false, 00:14:22.003 "zerocopy_threshold": 0, 00:14:22.003 "tls_version": 0, 00:14:22.003 "enable_ktls": false 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "sock_impl_set_options", 00:14:22.003 "params": { 00:14:22.003 "impl_name": "uring", 00:14:22.003 "recv_buf_size": 2097152, 00:14:22.003 "send_buf_size": 2097152, 00:14:22.003 "enable_recv_pipe": true, 00:14:22.003 "enable_quickack": false, 00:14:22.003 "enable_placement_id": 0, 00:14:22.003 "enable_zerocopy_send_server": false, 00:14:22.003 "enable_zerocopy_send_client": false, 00:14:22.003 "zerocopy_threshold": 0, 00:14:22.003 "tls_version": 0, 00:14:22.003 "enable_ktls": false 00:14:22.003 } 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "vmd", 00:14:22.003 "config": [] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "accel", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "accel_set_options", 00:14:22.003 "params": { 00:14:22.003 "small_cache_size": 128, 00:14:22.003 "large_cache_size": 16, 00:14:22.003 "task_count": 2048, 00:14:22.003 "sequence_count": 2048, 00:14:22.003 "buf_count": 2048 00:14:22.003 } 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "bdev", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "bdev_set_options", 00:14:22.003 "params": { 00:14:22.003 "bdev_io_pool_size": 65535, 00:14:22.003 "bdev_io_cache_size": 256, 
00:14:22.003 "bdev_auto_examine": true, 00:14:22.003 "iobuf_small_cache_size": 128, 00:14:22.003 "iobuf_large_cache_size": 16 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_raid_set_options", 00:14:22.003 "params": { 00:14:22.003 "process_window_size_kb": 1024 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_iscsi_set_options", 00:14:22.003 "params": { 00:14:22.003 "timeout_sec": 30 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_nvme_set_options", 00:14:22.003 "params": { 00:14:22.003 "action_on_timeout": "none", 00:14:22.003 "timeout_us": 0, 00:14:22.003 "timeout_admin_us": 0, 00:14:22.003 "keep_alive_timeout_ms": 10000, 00:14:22.003 "arbitration_burst": 0, 00:14:22.003 "low_priority_weight": 0, 00:14:22.003 "medium_priority_weight": 0, 00:14:22.003 "high_priority_weight": 0, 00:14:22.003 "nvme_adminq_poll_period_us": 10000, 00:14:22.003 "nvme_ioq_poll_period_us": 0, 00:14:22.003 "io_queue_requests": 0, 00:14:22.003 "delay_cmd_submit": true, 00:14:22.003 "transport_retry_count": 4, 00:14:22.003 "bdev_retry_count": 3, 00:14:22.003 "transport_ack_timeout": 0, 00:14:22.003 "ctrlr_loss_timeout_sec": 0, 00:14:22.003 "reconnect_delay_sec": 0, 00:14:22.003 "fast_io_fail_timeout_sec": 0, 00:14:22.003 "disable_auto_failback": false, 00:14:22.003 "generate_uuids": false, 00:14:22.003 "transport_tos": 0, 00:14:22.003 "nvme_error_stat": false, 00:14:22.003 "rdma_srq_size": 0, 00:14:22.003 "io_path_stat": false, 00:14:22.003 "allow_accel_sequence": false, 00:14:22.003 "rdma_max_cq_size": 0, 00:14:22.003 "rdma_cm_event_timeout_ms": 0, 00:14:22.003 "dhchap_digests": [ 00:14:22.003 "sha256", 00:14:22.003 "sha384", 00:14:22.003 "sha512" 00:14:22.003 ], 00:14:22.003 "dhchap_dhgroups": [ 00:14:22.003 "null", 00:14:22.003 "ffdhe2048", 00:14:22.003 "ffdhe3072", 00:14:22.003 "ffdhe4096", 00:14:22.003 "ffdhe6144", 00:14:22.003 "ffdhe8192" 00:14:22.003 ] 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_nvme_set_hotplug", 00:14:22.003 "params": { 00:14:22.003 "period_us": 100000, 00:14:22.003 "enable": false 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_malloc_create", 00:14:22.003 "params": { 00:14:22.003 "name": "malloc0", 00:14:22.003 "num_blocks": 8192, 00:14:22.003 "block_size": 4096, 00:14:22.003 "physical_block_size": 4096, 00:14:22.003 "uuid": "8038c602-b8c0-4a78-89c3-b9dc6552d89c", 00:14:22.003 "optimal_io_boundary": 0 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "bdev_wait_for_examine" 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "nbd", 00:14:22.003 "config": [] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "scheduler", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "framework_set_scheduler", 00:14:22.003 "params": { 00:14:22.003 "name": "static" 00:14:22.003 } 00:14:22.003 } 00:14:22.003 ] 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "subsystem": "nvmf", 00:14:22.003 "config": [ 00:14:22.003 { 00:14:22.003 "method": "nvmf_set_config", 00:14:22.003 "params": { 00:14:22.003 "discovery_filter": "match_any", 00:14:22.003 "admin_cmd_passthru": { 00:14:22.003 "identify_ctrlr": false 00:14:22.003 } 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_set_max_subsystems", 00:14:22.003 "params": { 00:14:22.003 "max_subsystems": 1024 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_set_crdt", 00:14:22.003 "params": { 00:14:22.003 "crdt1": 
0, 00:14:22.003 "crdt2": 0, 00:14:22.003 "crdt3": 0 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_create_transport", 00:14:22.003 "params": { 00:14:22.003 "trtype": "TCP", 00:14:22.003 "max_queue_depth": 128, 00:14:22.003 "max_io_qpairs_per_ctrlr": 127, 00:14:22.003 "in_capsule_data_size": 4096, 00:14:22.003 "max_io_size": 131072, 00:14:22.003 "io_unit_size": 131072, 00:14:22.003 "max_aq_depth": 128, 00:14:22.003 "num_shared_buffers": 511, 00:14:22.003 "buf_cache_size": 4294967295, 00:14:22.003 "dif_insert_or_strip": false, 00:14:22.003 "zcopy": false, 00:14:22.003 "c2h_success": false, 00:14:22.003 "sock_priority": 0, 00:14:22.003 "abort_timeout_sec": 1, 00:14:22.003 "ack_timeout": 0, 00:14:22.003 "data_wr_pool_size": 0 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_create_subsystem", 00:14:22.003 "params": { 00:14:22.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.003 "allow_any_host": false, 00:14:22.003 "serial_number": "00000000000000000000", 00:14:22.003 "model_number": "SPDK bdev Controller", 00:14:22.003 "max_namespaces": 32, 00:14:22.003 "min_cntlid": 1, 00:14:22.003 "max_cntlid": 65519, 00:14:22.003 "ana_reporting": false 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_subsystem_add_host", 00:14:22.003 "params": { 00:14:22.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.003 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.003 "psk": "key0" 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_subsystem_add_ns", 00:14:22.003 "params": { 00:14:22.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.003 "namespace": { 00:14:22.003 "nsid": 1, 00:14:22.003 "bdev_name": "malloc0", 00:14:22.003 "nguid": "8038C602B8C04A7889C3B9DC6552D89C", 00:14:22.003 "uuid": "8038c602-b8c0-4a78-89c3-b9dc6552d89c", 00:14:22.003 "no_auto_visible": false 00:14:22.003 } 00:14:22.003 } 00:14:22.003 }, 00:14:22.003 { 00:14:22.003 "method": "nvmf_subsystem_add_listener", 00:14:22.003 "params": { 00:14:22.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.004 "listen_address": { 00:14:22.004 "trtype": "TCP", 00:14:22.004 "adrfam": "IPv4", 00:14:22.004 "traddr": "10.0.0.2", 00:14:22.004 "trsvcid": "4420" 00:14:22.004 }, 00:14:22.004 "secure_channel": false, 00:14:22.004 "sock_impl": "ssl" 00:14:22.004 } 00:14:22.004 } 00:14:22.004 ] 00:14:22.004 } 00:14:22.004 ] 00:14:22.004 }' 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74204 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74204 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74204 ']' 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
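The echo '{ ... }' and the ip netns exec ... nvmf_tgt -c /dev/fd/62 traces above are the two halves of a round trip: the configuration saved from the previous target (tgtcfg) is replayed on a fresh nvmf_tgt through a file descriptor. A rough equivalent using plain process substitution instead of the harness's fd 62 wiring, with $rpc as in the earlier sketch:

  tgtcfg=$($rpc save_config)                     # JSON dump of the running target's configuration
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")   # replay it on a new target instance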
00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.004 19:52:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.004 [2024-07-15 19:52:16.154894] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:22.004 [2024-07-15 19:52:16.154984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.263 [2024-07-15 19:52:16.292194] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.263 [2024-07-15 19:52:16.409791] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.263 [2024-07-15 19:52:16.409854] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.263 [2024-07-15 19:52:16.409883] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.263 [2024-07-15 19:52:16.409892] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.263 [2024-07-15 19:52:16.409899] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.263 [2024-07-15 19:52:16.409993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.524 [2024-07-15 19:52:16.578773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.524 [2024-07-15 19:52:16.657446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.524 [2024-07-15 19:52:16.689391] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.524 [2024-07-15 19:52:16.689640] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
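The client side mirrors that pattern in the trace below: bdevperf is launched with the saved bperfcfg JSON on fd 63, so the keyring_file_add_key and bdev_nvme_attach_controller --psk key0 calls come from the config file rather than separate RPCs, and the run is then driven over its private socket with perform_tests. A rough equivalent, assuming $bperfcfg holds the JSON captured above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &                                     # start the initiator with the saved config
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # kick off the 1 s verify run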
00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74236 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74236 /var/tmp/bdevperf.sock 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74236 ']' 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:23.093 19:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:23.093 "subsystems": [ 00:14:23.093 { 00:14:23.093 "subsystem": "keyring", 00:14:23.093 "config": [ 00:14:23.093 { 00:14:23.093 "method": "keyring_file_add_key", 00:14:23.093 "params": { 00:14:23.093 "name": "key0", 00:14:23.093 "path": "/tmp/tmp.X4p5aVIgFR" 00:14:23.093 } 00:14:23.093 } 00:14:23.093 ] 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "subsystem": "iobuf", 00:14:23.093 "config": [ 00:14:23.093 { 00:14:23.093 "method": "iobuf_set_options", 00:14:23.093 "params": { 00:14:23.093 "small_pool_count": 8192, 00:14:23.093 "large_pool_count": 1024, 00:14:23.093 "small_bufsize": 8192, 00:14:23.093 "large_bufsize": 135168 00:14:23.093 } 00:14:23.093 } 00:14:23.093 ] 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "subsystem": "sock", 00:14:23.093 "config": [ 00:14:23.093 { 00:14:23.093 "method": "sock_set_default_impl", 00:14:23.093 "params": { 00:14:23.093 "impl_name": "uring" 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "sock_impl_set_options", 00:14:23.093 "params": { 00:14:23.093 "impl_name": "ssl", 00:14:23.093 "recv_buf_size": 4096, 00:14:23.093 "send_buf_size": 4096, 00:14:23.093 "enable_recv_pipe": true, 00:14:23.093 "enable_quickack": false, 00:14:23.093 "enable_placement_id": 0, 00:14:23.093 "enable_zerocopy_send_server": true, 00:14:23.093 "enable_zerocopy_send_client": false, 00:14:23.093 "zerocopy_threshold": 0, 00:14:23.093 "tls_version": 0, 00:14:23.093 "enable_ktls": false 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "sock_impl_set_options", 00:14:23.093 "params": { 00:14:23.093 "impl_name": "posix", 00:14:23.093 "recv_buf_size": 2097152, 00:14:23.093 "send_buf_size": 2097152, 00:14:23.093 "enable_recv_pipe": true, 00:14:23.093 "enable_quickack": false, 00:14:23.093 "enable_placement_id": 0, 00:14:23.093 "enable_zerocopy_send_server": true, 00:14:23.093 "enable_zerocopy_send_client": false, 00:14:23.093 "zerocopy_threshold": 0, 00:14:23.093 "tls_version": 0, 00:14:23.093 "enable_ktls": false 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "sock_impl_set_options", 00:14:23.093 "params": { 00:14:23.093 "impl_name": "uring", 00:14:23.093 "recv_buf_size": 2097152, 00:14:23.093 "send_buf_size": 2097152, 00:14:23.093 "enable_recv_pipe": true, 00:14:23.093 
"enable_quickack": false, 00:14:23.093 "enable_placement_id": 0, 00:14:23.093 "enable_zerocopy_send_server": false, 00:14:23.093 "enable_zerocopy_send_client": false, 00:14:23.093 "zerocopy_threshold": 0, 00:14:23.093 "tls_version": 0, 00:14:23.093 "enable_ktls": false 00:14:23.093 } 00:14:23.093 } 00:14:23.093 ] 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "subsystem": "vmd", 00:14:23.093 "config": [] 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "subsystem": "accel", 00:14:23.093 "config": [ 00:14:23.093 { 00:14:23.093 "method": "accel_set_options", 00:14:23.093 "params": { 00:14:23.093 "small_cache_size": 128, 00:14:23.093 "large_cache_size": 16, 00:14:23.093 "task_count": 2048, 00:14:23.093 "sequence_count": 2048, 00:14:23.093 "buf_count": 2048 00:14:23.093 } 00:14:23.093 } 00:14:23.093 ] 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "subsystem": "bdev", 00:14:23.093 "config": [ 00:14:23.093 { 00:14:23.093 "method": "bdev_set_options", 00:14:23.093 "params": { 00:14:23.093 "bdev_io_pool_size": 65535, 00:14:23.093 "bdev_io_cache_size": 256, 00:14:23.093 "bdev_auto_examine": true, 00:14:23.093 "iobuf_small_cache_size": 128, 00:14:23.093 "iobuf_large_cache_size": 16 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "bdev_raid_set_options", 00:14:23.093 "params": { 00:14:23.093 "process_window_size_kb": 1024 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "bdev_iscsi_set_options", 00:14:23.093 "params": { 00:14:23.093 "timeout_sec": 30 00:14:23.093 } 00:14:23.093 }, 00:14:23.093 { 00:14:23.093 "method": "bdev_nvme_set_options", 00:14:23.093 "params": { 00:14:23.093 "action_on_timeout": "none", 00:14:23.093 "timeout_us": 0, 00:14:23.094 "timeout_admin_us": 0, 00:14:23.094 "keep_alive_timeout_ms": 10000, 00:14:23.094 "arbitration_burst": 0, 00:14:23.094 "low_priority_weight": 0, 00:14:23.094 "medium_priority_weight": 0, 00:14:23.094 "high_priority_weight": 0, 00:14:23.094 "nvme_adminq_poll_period_us": 10000, 00:14:23.094 "nvme_ioq_poll_period_us": 0, 00:14:23.094 "io_queue_requests": 512, 00:14:23.094 "delay_cmd_submit": true, 00:14:23.094 "transport_retry_count": 4, 00:14:23.094 "bdev_retry_count": 3, 00:14:23.094 "transport_ack_timeout": 0, 00:14:23.094 "ctrlr_loss_timeout_sec": 0, 00:14:23.094 "reconnect_delay_sec": 0, 00:14:23.094 "fast_io_fail_timeout_sec": 0, 00:14:23.094 "disable_auto_failback": false, 00:14:23.094 "generate_uuids": false, 00:14:23.094 "transport_tos": 0, 00:14:23.094 "nvme_error_stat": false, 00:14:23.094 "rdma_srq_size": 0, 00:14:23.094 "io_path_stat": false, 00:14:23.094 "allow_accel_sequence": false, 00:14:23.094 "rdma_max_cq_size": 0, 00:14:23.094 "rdma_cm_event_timeout_ms": 0, 00:14:23.094 "dhchap_digests": [ 00:14:23.094 "sha256", 00:14:23.094 "sha384", 00:14:23.094 "sha512" 00:14:23.094 ], 00:14:23.094 "dhchap_dhgroups": [ 00:14:23.094 "null", 00:14:23.094 "ffdhe2048", 00:14:23.094 "ffdhe3072", 00:14:23.094 "ffdhe4096", 00:14:23.094 "ffdhe6144", 00:14:23.094 "ffdhe8192" 00:14:23.094 ] 00:14:23.094 } 00:14:23.094 }, 00:14:23.094 { 00:14:23.094 "method": "bdev_nvme_attach_controller", 00:14:23.094 "params": { 00:14:23.094 "name": "nvme0", 00:14:23.094 "trtype": "TCP", 00:14:23.094 "adrfam": "IPv4", 00:14:23.094 "traddr": "10.0.0.2", 00:14:23.094 "trsvcid": "4420", 00:14:23.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.094 "prchk_reftag": false, 00:14:23.094 "prchk_guard": false, 00:14:23.094 "ctrlr_loss_timeout_sec": 0, 00:14:23.094 "reconnect_delay_sec": 0, 00:14:23.094 "fast_io_fail_timeout_sec": 0, 00:14:23.094 "psk": 
"key0", 00:14:23.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.094 "hdgst": false, 00:14:23.094 "ddgst": false 00:14:23.094 } 00:14:23.094 }, 00:14:23.094 { 00:14:23.094 "method": "bdev_nvme_set_hotplug", 00:14:23.094 "params": { 00:14:23.094 "period_us": 100000, 00:14:23.094 "enable": false 00:14:23.094 } 00:14:23.094 }, 00:14:23.094 { 00:14:23.094 "method": "bdev_enable_histogram", 00:14:23.094 "params": { 00:14:23.094 "name": "nvme0n1", 00:14:23.094 "enable": true 00:14:23.094 } 00:14:23.094 }, 00:14:23.094 { 00:14:23.094 "method": "bdev_wait_for_examine" 00:14:23.094 } 00:14:23.094 ] 00:14:23.094 }, 00:14:23.094 { 00:14:23.094 "subsystem": "nbd", 00:14:23.094 "config": [] 00:14:23.094 } 00:14:23.094 ] 00:14:23.094 }' 00:14:23.094 [2024-07-15 19:52:17.271887] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:23.094 [2024-07-15 19:52:17.272255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74236 ] 00:14:23.353 [2024-07-15 19:52:17.411312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.353 [2024-07-15 19:52:17.541930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.612 [2024-07-15 19:52:17.679131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.612 [2024-07-15 19:52:17.728016] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.191 19:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.191 19:52:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:24.191 19:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:24.191 19:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:24.450 19:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.450 19:52:18 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.708 Running I/O for 1 seconds... 
00:14:25.645 00:14:25.645 Latency(us) 00:14:25.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.645 Verification LBA range: start 0x0 length 0x2000 00:14:25.645 nvme0n1 : 1.03 3922.41 15.32 0.00 0.00 32134.89 6136.55 24784.52 00:14:25.645 =================================================================================================================== 00:14:25.645 Total : 3922.41 15.32 0.00 0.00 32134.89 6136.55 24784.52 00:14:25.645 0 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:25.645 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:25.646 nvmf_trace.0 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74236 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74236 ']' 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74236 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74236 00:14:25.646 killing process with pid 74236 00:14:25.646 Received shutdown signal, test time was about 1.000000 seconds 00:14:25.646 00:14:25.646 Latency(us) 00:14:25.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.646 =================================================================================================================== 00:14:25.646 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74236' 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74236 00:14:25.646 19:52:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74236 00:14:25.905 19:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:25.905 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.905 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.164 rmmod nvme_tcp 00:14:26.164 rmmod nvme_fabrics 00:14:26.164 rmmod nvme_keyring 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74204 ']' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74204 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74204 ']' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74204 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74204 00:14:26.164 killing process with pid 74204 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74204' 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74204 00:14:26.164 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74204 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ximMFOM3yW /tmp/tmp.orcLRN9PyB /tmp/tmp.X4p5aVIgFR 00:14:26.423 ************************************ 00:14:26.423 END TEST nvmf_tls 00:14:26.423 ************************************ 00:14:26.423 00:14:26.423 real 1m27.479s 00:14:26.423 user 2m16.514s 00:14:26.423 sys 0m29.703s 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.423 19:52:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.423 19:52:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.423 19:52:20 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:26.423 19:52:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.423 19:52:20 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.423 19:52:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.423 ************************************ 00:14:26.423 START TEST nvmf_fips 00:14:26.423 ************************************ 00:14:26.424 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:26.683 * Looking for test storage... 00:14:26.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:26.683 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:26.684 Error setting digest 00:14:26.684 00627DDCA87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:26.684 00627DDCA87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.684 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:26.943 Cannot find device "nvmf_tgt_br" 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.943 Cannot find device "nvmf_tgt_br2" 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:26.943 Cannot find device "nvmf_tgt_br" 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:26.943 Cannot find device "nvmf_tgt_br2" 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:26.943 19:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.943 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:27.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:27.202 00:14:27.202 --- 10.0.0.2 ping statistics --- 00:14:27.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.202 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:27.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:27.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:27.202 00:14:27.202 --- 10.0.0.3 ping statistics --- 00:14:27.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.202 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:27.202 00:14:27.202 --- 10.0.0.1 ping statistics --- 00:14:27.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.202 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74500 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74500 00:14:27.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74500 ']' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.202 19:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.202 [2024-07-15 19:52:21.376195] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:14:27.202 [2024-07-15 19:52:21.376667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.462 [2024-07-15 19:52:21.521312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.462 [2024-07-15 19:52:21.682225] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.462 [2024-07-15 19:52:21.682596] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.462 [2024-07-15 19:52:21.682702] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.462 [2024-07-15 19:52:21.682729] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.462 [2024-07-15 19:52:21.682747] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.462 [2024-07-15 19:52:21.682810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.721 [2024-07-15 19:52:21.763144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.291 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.551 [2024-07-15 19:52:22.710871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.551 [2024-07-15 19:52:22.726805] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.551 [2024-07-15 19:52:22.727050] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.551 [2024-07-15 19:52:22.761954] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:28.551 malloc0 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
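None of the TLS setup above runs unless the FIPS gate earlier in fips.sh passes. Stripped of the xtrace noise, that gate reduces to a handful of openssl checks; a hedged sketch of the same checks (spdk_fips.conf is the config build_openssl_config wrote above, and the expected outcomes are assumptions based on this RHEL 9 host):

  # OpenSSL must be >= 3.0.0 and the FIPS provider module must exist on disk
  openssl version | awk '{print $2}'                     # 3.0.9 here, so the 'ge 3.0.9 3.0.0' comparison passes
  test -f /usr/lib64/ossl-modules/fips.so
  # with the generated config active, both the base and fips providers must be listed
  OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
  # and a non-approved digest must now fail -- the 'Error setting digest' output above is the expected result
  echo -n test | OPENSSL_CONF=spdk_fips.conf openssl md5 && echo 'FIPS mode is NOT enforced'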
00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74538 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74538 /var/tmp/bdevperf.sock 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74538 ']' 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.551 19:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.810 [2024-07-15 19:52:22.874756] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:28.810 [2024-07-15 19:52:22.875071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74538 ] 00:14:28.810 [2024-07-15 19:52:23.017011] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.069 [2024-07-15 19:52:23.140483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.069 [2024-07-15 19:52:23.196168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.637 19:52:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.637 19:52:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:29.637 19:52:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.895 [2024-07-15 19:52:24.115465] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.895 [2024-07-15 19:52:24.115592] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:30.155 TLSTESTn1 00:14:30.155 19:52:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:30.155 Running I/O for 10 seconds... 
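The PSK plumbing in this run is symmetric: the interleaved key written to key.txt (mode 0600) is registered on the target by setup_nvmf_tgt_conf, then handed to the initiator with --psk. The target-side RPCs are not expanded in this trace, so that half is only summarized as an assumption; the initiator call is the one shown above, annotated flag by flag:

  # target side (not expanded here): setup_nvmf_tgt_conf key.txt associates the PSK with
  # nqn.2016-06.io.spdk:host1 so the TLS listener on 10.0.0.2:4420 will accept it.
  # initiator side, the exact call from the trace:
  #   -b  bdev name to create (TLSTEST)      -t/-a/-s/-f  transport, address, port, address family
  #   -n  subsystem NQN on the target        -q  host NQN the key was registered for
  #   --psk  path to the PSK file (the source of the deprecation warning above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

The ten-second verify results for the TLSTESTn1 bdev created by this call follow.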
00:14:40.130 00:14:40.130 Latency(us) 00:14:40.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.130 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:40.130 Verification LBA range: start 0x0 length 0x2000 00:14:40.130 TLSTESTn1 : 10.02 3864.23 15.09 0.00 0.00 33061.44 7328.12 28835.84 00:14:40.130 =================================================================================================================== 00:14:40.130 Total : 3864.23 15.09 0.00 0.00 33061.44 7328.12 28835.84 00:14:40.130 0 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:40.130 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:40.389 nvmf_trace.0 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74538 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74538 ']' 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74538 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74538 00:14:40.389 killing process with pid 74538 00:14:40.389 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.389 00:14:40.389 Latency(us) 00:14:40.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.389 =================================================================================================================== 00:14:40.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74538' 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74538 00:14:40.389 [2024-07-15 19:52:34.497969] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:40.389 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74538 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
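Every test in this log tears down the same way: archive the shared-memory trace, kill the bdevperf and nvmf_tgt processes, then unload the nvme modules and delete the namespace (the rmmod and ip netns steps follow below). A rough reconstruction of the killprocess helper seen throughout the trace, offered as a sketch rather than a verbatim copy of autotest_common.sh:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")         # SPDK apps show up as reactor_N
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid"                      # reap it so the next test starts clean
  }
  # the /dev/shm trace file is archived first, as in the tar call above ($output_dir is a stand-in):
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0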
00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.648 rmmod nvme_tcp 00:14:40.648 rmmod nvme_fabrics 00:14:40.648 rmmod nvme_keyring 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74500 ']' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74500 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74500 ']' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74500 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74500 00:14:40.648 killing process with pid 74500 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74500' 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74500 00:14:40.648 [2024-07-15 19:52:34.846850] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:40.648 19:52:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74500 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:40.907 ************************************ 00:14:40.907 END TEST nvmf_fips 00:14:40.907 ************************************ 00:14:40.907 00:14:40.907 real 0m14.518s 00:14:40.907 user 0m19.689s 00:14:40.907 sys 0m5.911s 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.907 19:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:41.166 19:52:35 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:41.166 19:52:35 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.166 19:52:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.166 ************************************ 00:14:41.166 START TEST nvmf_identify 00:14:41.166 ************************************ 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:41.166 * Looking for test storage... 00:14:41.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.166 19:52:35 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.166 19:52:35 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:41.166 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:41.166 Cannot find device "nvmf_tgt_br" 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.167 Cannot find device "nvmf_tgt_br2" 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:41.167 19:52:35 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:41.167 Cannot find device "nvmf_tgt_br" 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:41.167 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:41.167 Cannot find device "nvmf_tgt_br2" 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
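The remaining bridge enslaving, iptables rules and connectivity pings continue below; for orientation, the virtual topology nvmf_veth_init builds here is, in condensed form (the same commands as in the trace):

  ip netns add nvmf_tgt_ns_spdk                                # the target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, first interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # the *_br peers are then enslaved to this bridge,
                                                               # port 4420 is opened in iptables, and the three
                                                               # pings below verify 10.0.0.1 <-> 10.0.0.2/10.0.0.3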
00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.425 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:41.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:41.426 00:14:41.426 --- 10.0.0.2 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:41.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:14:41.426 00:14:41.426 --- 10.0.0.3 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:41.426 00:14:41.426 --- 10.0.0.1 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.426 19:52:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74886 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74886 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74886 ']' 00:14:41.684 19:52:35 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.684 19:52:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:41.684 [2024-07-15 19:52:35.747509] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:41.684 [2024-07-15 19:52:35.747635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.684 [2024-07-15 19:52:35.890325] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.943 [2024-07-15 19:52:36.008038] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.943 [2024-07-15 19:52:36.008114] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.943 [2024-07-15 19:52:36.008141] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.943 [2024-07-15 19:52:36.008151] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.943 [2024-07-15 19:52:36.008160] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.943 [2024-07-15 19:52:36.009010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.943 [2024-07-15 19:52:36.009380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.943 [2024-07-15 19:52:36.009216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.943 [2024-07-15 19:52:36.009373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.943 [2024-07-15 19:52:36.067536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 [2024-07-15 19:52:36.704858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 Malloc0 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.581 [2024-07-15 19:52:36.820160] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.581 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.842 [ 00:14:42.842 { 00:14:42.842 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:42.842 "subtype": "Discovery", 00:14:42.842 "listen_addresses": [ 00:14:42.842 { 00:14:42.842 "trtype": "TCP", 00:14:42.842 "adrfam": "IPv4", 00:14:42.842 "traddr": "10.0.0.2", 00:14:42.842 "trsvcid": "4420" 00:14:42.842 } 00:14:42.842 ], 00:14:42.842 "allow_any_host": true, 00:14:42.842 "hosts": [] 00:14:42.842 }, 00:14:42.842 { 00:14:42.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.842 "subtype": "NVMe", 00:14:42.842 "listen_addresses": [ 00:14:42.842 { 00:14:42.842 "trtype": "TCP", 00:14:42.842 "adrfam": "IPv4", 00:14:42.842 "traddr": "10.0.0.2", 00:14:42.842 "trsvcid": "4420" 00:14:42.842 } 00:14:42.842 ], 00:14:42.842 "allow_any_host": true, 00:14:42.842 "hosts": [], 00:14:42.842 "serial_number": "SPDK00000000000001", 00:14:42.842 "model_number": "SPDK bdev Controller", 00:14:42.842 "max_namespaces": 32, 00:14:42.842 "min_cntlid": 1, 00:14:42.842 "max_cntlid": 65519, 00:14:42.842 "namespaces": [ 00:14:42.842 { 00:14:42.842 "nsid": 1, 00:14:42.842 "bdev_name": "Malloc0", 00:14:42.842 "name": "Malloc0", 00:14:42.842 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:42.842 "eui64": "ABCDEF0123456789", 00:14:42.842 "uuid": "288ac6ca-5275-48d1-a982-22d6242392ab" 00:14:42.842 } 00:14:42.842 ] 00:14:42.842 } 00:14:42.842 ] 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.842 19:52:36 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:42.842 [2024-07-15 19:52:36.870497] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:14:42.842 [2024-07-15 19:52:36.870558] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74921 ] 00:14:42.842 [2024-07-15 19:52:37.006010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:42.843 [2024-07-15 19:52:37.006102] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:42.843 [2024-07-15 19:52:37.006109] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:42.843 [2024-07-15 19:52:37.006123] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:42.843 [2024-07-15 19:52:37.006133] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:42.843 [2024-07-15 19:52:37.006298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:42.843 [2024-07-15 19:52:37.006384] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12d8510 0 00:14:42.843 [2024-07-15 19:52:37.011329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:42.843 [2024-07-15 19:52:37.011352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:42.843 [2024-07-15 19:52:37.011374] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:42.843 [2024-07-15 19:52:37.011394] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:42.843 [2024-07-15 19:52:37.011442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.011450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.011455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.011470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:42.843 [2024-07-15 19:52:37.011499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.019313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.019356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.019361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.019396] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:42.843 [2024-07-15 19:52:37.019404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:42.843 [2024-07-15 19:52:37.019410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:42.843 [2024-07-15 19:52:37.019431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 
[2024-07-15 19:52:37.019441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.019450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.019476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.019539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.019546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.019549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.019560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:42.843 [2024-07-15 19:52:37.019567] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:42.843 [2024-07-15 19:52:37.019575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.019591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.019625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.019669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.019676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.019680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.019691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:42.843 [2024-07-15 19:52:37.019700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.019707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.019723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.019741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.019786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.019793] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.019796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.019807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.019817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.019833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.019850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.019894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.019902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.019905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.019909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.019915] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:42.843 [2024-07-15 19:52:37.019920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.019928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.020034] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:42.843 [2024-07-15 19:52:37.020040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.020050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.020084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.020134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.020142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.020145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 
[2024-07-15 19:52:37.020150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.020155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:42.843 [2024-07-15 19:52:37.020165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.020198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.020248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.020255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.020258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.020268] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:42.843 [2024-07-15 19:52:37.020273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:42.843 [2024-07-15 19:52:37.020292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.020350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.020436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:42.843 [2024-07-15 19:52:37.020444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:42.843 [2024-07-15 19:52:37.020448] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020452] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8510): datao=0, datal=4096, cccid=0 00:14:42.843 [2024-07-15 19:52:37.020457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133af00) on tqpair(0x12d8510): expected_datao=0, payload_size=4096 00:14:42.843 [2024-07-15 19:52:37.020462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 
[2024-07-15 19:52:37.020471] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020476] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.020491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.020495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.020508] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:42.843 [2024-07-15 19:52:37.020514] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:42.843 [2024-07-15 19:52:37.020519] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:42.843 [2024-07-15 19:52:37.020525] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:42.843 [2024-07-15 19:52:37.020530] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:42.843 [2024-07-15 19:52:37.020536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.843 [2024-07-15 19:52:37.020597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.020653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.020660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.020664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.020677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.843 [2024-07-15 19:52:37.020700] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.843 [2024-07-15 19:52:37.020721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.843 [2024-07-15 19:52:37.020741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.843 [2024-07-15 19:52:37.020760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:42.843 [2024-07-15 19:52:37.020788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.020829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133af00, cid 0, qid 0 00:14:42.843 [2024-07-15 19:52:37.020836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b080, cid 1, qid 0 00:14:42.843 [2024-07-15 19:52:37.020841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b200, cid 2, qid 0 00:14:42.843 [2024-07-15 19:52:37.020846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.843 [2024-07-15 19:52:37.020851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b500, cid 4, qid 0 00:14:42.843 [2024-07-15 19:52:37.020942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.020949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.020952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b500) on tqpair=0x12d8510 00:14:42.843 [2024-07-15 19:52:37.020967] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:42.843 [2024-07-15 19:52:37.020973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:42.843 [2024-07-15 19:52:37.020986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.020991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8510) 00:14:42.843 [2024-07-15 19:52:37.020998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.843 [2024-07-15 19:52:37.021017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b500, cid 4, qid 0 00:14:42.843 [2024-07-15 19:52:37.021074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:42.843 [2024-07-15 19:52:37.021081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:42.843 [2024-07-15 19:52:37.021085] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.021089] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8510): datao=0, datal=4096, cccid=4 00:14:42.843 [2024-07-15 19:52:37.021094] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133b500) on tqpair(0x12d8510): expected_datao=0, payload_size=4096 00:14:42.843 [2024-07-15 19:52:37.021098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.021106] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.021113] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.021122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.843 [2024-07-15 19:52:37.021128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.843 [2024-07-15 19:52:37.021132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.843 [2024-07-15 19:52:37.021136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b500) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021150] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:42.844 [2024-07-15 19:52:37.021189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8510) 00:14:42.844 [2024-07-15 19:52:37.021204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.844 [2024-07-15 19:52:37.021212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8510) 00:14:42.844 [2024-07-15 19:52:37.021227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.844 [2024-07-15 19:52:37.021252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x133b500, cid 4, qid 0 00:14:42.844 [2024-07-15 19:52:37.021260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b680, cid 5, qid 0 00:14:42.844 [2024-07-15 19:52:37.021380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:42.844 [2024-07-15 19:52:37.021388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:42.844 [2024-07-15 19:52:37.021392] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021396] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8510): datao=0, datal=1024, cccid=4 00:14:42.844 [2024-07-15 19:52:37.021401] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133b500) on tqpair(0x12d8510): expected_datao=0, payload_size=1024 00:14:42.844 [2024-07-15 19:52:37.021405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021412] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021416] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.844 [2024-07-15 19:52:37.021429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.844 [2024-07-15 19:52:37.021432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b680) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.844 [2024-07-15 19:52:37.021463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.844 [2024-07-15 19:52:37.021467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b500) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8510) 00:14:42.844 [2024-07-15 19:52:37.021498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.844 [2024-07-15 19:52:37.021523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b500, cid 4, qid 0 00:14:42.844 [2024-07-15 19:52:37.021591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:42.844 [2024-07-15 19:52:37.021598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:42.844 [2024-07-15 19:52:37.021602] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021606] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8510): datao=0, datal=3072, cccid=4 00:14:42.844 [2024-07-15 19:52:37.021611] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133b500) on tqpair(0x12d8510): expected_datao=0, payload_size=3072 00:14:42.844 [2024-07-15 19:52:37.021616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021623] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021627] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.844 [2024-07-15 19:52:37.021641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.844 [2024-07-15 19:52:37.021645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b500) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8510) 00:14:42.844 [2024-07-15 19:52:37.021673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.844 [2024-07-15 19:52:37.021697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b500, cid 4, qid 0 00:14:42.844 [2024-07-15 19:52:37.021761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:42.844 [2024-07-15 19:52:37.021768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:42.844 [2024-07-15 19:52:37.021772] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021776] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8510): datao=0, datal=8, cccid=4 00:14:42.844 [2024-07-15 19:52:37.021781] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133b500) on tqpair(0x12d8510): expected_datao=0, payload_size=8 00:14:42.844 [2024-07-15 19:52:37.021786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021793] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021797] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.844 ===================================================== 00:14:42.844 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:42.844 ===================================================== 00:14:42.844 Controller Capabilities/Features 00:14:42.844 ================================ 00:14:42.844 Vendor ID: 0000 00:14:42.844 Subsystem Vendor ID: 0000 00:14:42.844 Serial Number: .................... 00:14:42.844 Model Number: ........................................ 
00:14:42.844 Firmware Version: 24.09 00:14:42.844 Recommended Arb Burst: 0 00:14:42.844 IEEE OUI Identifier: 00 00 00 00:14:42.844 Multi-path I/O 00:14:42.844 May have multiple subsystem ports: No 00:14:42.844 May have multiple controllers: No 00:14:42.844 Associated with SR-IOV VF: No 00:14:42.844 Max Data Transfer Size: 131072 00:14:42.844 Max Number of Namespaces: 0 00:14:42.844 Max Number of I/O Queues: 1024 00:14:42.844 NVMe Specification Version (VS): 1.3 00:14:42.844 NVMe Specification Version (Identify): 1.3 00:14:42.844 Maximum Queue Entries: 128 00:14:42.844 Contiguous Queues Required: Yes 00:14:42.844 Arbitration Mechanisms Supported 00:14:42.844 Weighted Round Robin: Not Supported 00:14:42.844 Vendor Specific: Not Supported 00:14:42.844 Reset Timeout: 15000 ms 00:14:42.844 Doorbell Stride: 4 bytes 00:14:42.844 NVM Subsystem Reset: Not Supported 00:14:42.844 Command Sets Supported 00:14:42.844 NVM Command Set: Supported 00:14:42.844 Boot Partition: Not Supported 00:14:42.844 Memory Page Size Minimum: 4096 bytes 00:14:42.844 Memory Page Size Maximum: 4096 bytes 00:14:42.844 Persistent Memory Region: Not Supported 00:14:42.844 Optional Asynchronous Events Supported 00:14:42.844 Namespace Attribute Notices: Not Supported 00:14:42.844 Firmware Activation Notices: Not Supported 00:14:42.844 ANA Change Notices: Not Supported 00:14:42.844 PLE Aggregate Log Change Notices: Not Supported 00:14:42.844 LBA Status Info Alert Notices: Not Supported 00:14:42.844 EGE Aggregate Log Change Notices: Not Supported 00:14:42.844 Normal NVM Subsystem Shutdown event: Not Supported 00:14:42.844 Zone Descriptor Change Notices: Not Supported 00:14:42.844 Discovery Log Change Notices: Supported 00:14:42.844 Controller Attributes 00:14:42.844 128-bit Host Identifier: Not Supported 00:14:42.844 Non-Operational Permissive Mode: Not Supported 00:14:42.844 NVM Sets: Not Supported 00:14:42.844 Read Recovery Levels: Not Supported 00:14:42.844 Endurance Groups: Not Supported 00:14:42.844 Predictable Latency Mode: Not Supported 00:14:42.844 Traffic Based Keep ALive: Not Supported 00:14:42.844 Namespace Granularity: Not Supported 00:14:42.844 SQ Associations: Not Supported 00:14:42.844 UUID List: Not Supported 00:14:42.844 Multi-Domain Subsystem: Not Supported 00:14:42.844 Fixed Capacity Management: Not Supported 00:14:42.844 Variable Capacity Management: Not Supported 00:14:42.844 Delete Endurance Group: Not Supported 00:14:42.844 Delete NVM Set: Not Supported 00:14:42.844 Extended LBA Formats Supported: Not Supported 00:14:42.844 Flexible Data Placement Supported: Not Supported 00:14:42.844 00:14:42.844 Controller Memory Buffer Support 00:14:42.844 ================================ 00:14:42.844 Supported: No 00:14:42.844 00:14:42.844 Persistent Memory Region Support 00:14:42.844 ================================ 00:14:42.844 Supported: No 00:14:42.844 00:14:42.844 Admin Command Set Attributes 00:14:42.844 ============================ 00:14:42.844 Security Send/Receive: Not Supported 00:14:42.844 Format NVM: Not Supported 00:14:42.844 Firmware Activate/Download: Not Supported 00:14:42.844 Namespace Management: Not Supported 00:14:42.844 Device Self-Test: Not Supported 00:14:42.844 Directives: Not Supported 00:14:42.844 NVMe-MI: Not Supported 00:14:42.844 Virtualization Management: Not Supported 00:14:42.844 Doorbell Buffer Config: Not Supported 00:14:42.844 Get LBA Status Capability: Not Supported 00:14:42.844 Command & Feature Lockdown Capability: Not Supported 00:14:42.844 Abort Command Limit: 1 00:14:42.844 Async 
Event Request Limit: 4 00:14:42.844 Number of Firmware Slots: N/A 00:14:42.844 Firmware Slot 1 Read-Only: N/A 00:14:42.844 Firmware Activation Without Reset: N/A 00:14:42.844 Multiple Update Detection Support: N/A 00:14:42.844 Firmware Update Granularity: No Information Provided 00:14:42.844 Per-Namespace SMART Log: No 00:14:42.844 Asymmetric Namespace Access Log Page: Not Supported 00:14:42.844 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:42.844 Command Effects Log Page: Not Supported 00:14:42.844 Get Log Page Extended Data: Supported 00:14:42.844 Telemetry Log Pages: Not Supported 00:14:42.844 Persistent Event Log Pages: Not Supported 00:14:42.844 Supported Log Pages Log Page: May Support 00:14:42.844 Commands Supported & Effects Log Page: Not Supported 00:14:42.844 Feature Identifiers & Effects Log Page:May Support 00:14:42.844 NVMe-MI Commands & Effects Log Page: May Support 00:14:42.844 Data Area 4 for Telemetry Log: Not Supported 00:14:42.844 Error Log Page Entries Supported: 128 00:14:42.844 Keep Alive: Not Supported 00:14:42.844 00:14:42.844 NVM Command Set Attributes 00:14:42.844 ========================== 00:14:42.844 Submission Queue Entry Size 00:14:42.844 Max: 1 00:14:42.844 Min: 1 00:14:42.844 Completion Queue Entry Size 00:14:42.844 Max: 1 00:14:42.844 Min: 1 00:14:42.844 Number of Namespaces: 0 00:14:42.844 Compare Command: Not Supported 00:14:42.844 Write Uncorrectable Command: Not Supported 00:14:42.844 Dataset Management Command: Not Supported 00:14:42.844 Write Zeroes Command: Not Supported 00:14:42.844 Set Features Save Field: Not Supported 00:14:42.844 Reservations: Not Supported 00:14:42.844 Timestamp: Not Supported 00:14:42.844 Copy: Not Supported 00:14:42.844 Volatile Write Cache: Not Present 00:14:42.844 Atomic Write Unit (Normal): 1 00:14:42.844 Atomic Write Unit (PFail): 1 00:14:42.844 Atomic Compare & Write Unit: 1 00:14:42.844 Fused Compare & Write: Supported 00:14:42.844 Scatter-Gather List 00:14:42.844 SGL Command Set: Supported 00:14:42.844 SGL Keyed: Supported 00:14:42.844 SGL Bit Bucket Descriptor: Not Supported 00:14:42.844 SGL Metadata Pointer: Not Supported 00:14:42.844 Oversized SGL: Not Supported 00:14:42.844 SGL Metadata Address: Not Supported 00:14:42.844 SGL Offset: Supported 00:14:42.844 Transport SGL Data Block: Not Supported 00:14:42.844 Replay Protected Memory Block: Not Supported 00:14:42.844 00:14:42.844 Firmware Slot Information 00:14:42.844 ========================= 00:14:42.844 Active slot: 0 00:14:42.844 00:14:42.844 00:14:42.844 Error Log 00:14:42.844 ========= 00:14:42.844 00:14:42.844 Active Namespaces 00:14:42.844 ================= 00:14:42.844 Discovery Log Page 00:14:42.844 ================== 00:14:42.844 Generation Counter: 2 00:14:42.844 Number of Records: 2 00:14:42.844 Record Format: 0 00:14:42.844 00:14:42.844 Discovery Log Entry 0 00:14:42.844 ---------------------- 00:14:42.844 Transport Type: 3 (TCP) 00:14:42.844 Address Family: 1 (IPv4) 00:14:42.844 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:42.844 Entry Flags: 00:14:42.844 Duplicate Returned Information: 1 00:14:42.844 Explicit Persistent Connection Support for Discovery: 1 00:14:42.844 Transport Requirements: 00:14:42.844 Secure Channel: Not Required 00:14:42.844 Port ID: 0 (0x0000) 00:14:42.844 Controller ID: 65535 (0xffff) 00:14:42.844 Admin Max SQ Size: 128 00:14:42.844 Transport Service Identifier: 4420 00:14:42.844 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:42.844 Transport Address: 10.0.0.2 00:14:42.844 
Discovery Log Entry 1 00:14:42.844 ---------------------- 00:14:42.844 Transport Type: 3 (TCP) 00:14:42.844 Address Family: 1 (IPv4) 00:14:42.844 Subsystem Type: 2 (NVM Subsystem) 00:14:42.844 Entry Flags: 00:14:42.844 Duplicate Returned Information: 0 00:14:42.844 Explicit Persistent Connection Support for Discovery: 0 00:14:42.844 Transport Requirements: 00:14:42.844 Secure Channel: Not Required 00:14:42.844 Port ID: 0 (0x0000) 00:14:42.844 Controller ID: 65535 (0xffff) 00:14:42.844 Admin Max SQ Size: 128 00:14:42.844 Transport Service Identifier: 4420 00:14:42.844 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:42.844 Transport Address: 10.0.0.2 [2024-07-15 19:52:37.021820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.844 [2024-07-15 19:52:37.021824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b500) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021924] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:42.844 [2024-07-15 19:52:37.021938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133af00) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.844 [2024-07-15 19:52:37.021951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b080) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.844 [2024-07-15 19:52:37.021961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b200) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.844 [2024-07-15 19:52:37.021972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.844 [2024-07-15 19:52:37.021976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.844 [2024-07-15 19:52:37.021986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.844 [2024-07-15 19:52:37.021994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.844 [2024-07-15 19:52:37.022002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.844 [2024-07-15 19:52:37.022025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 
19:52:37.022098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022229] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:42.845 [2024-07-15 19:52:37.022234] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:42.845 [2024-07-15 19:52:37.022245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022462] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:42.845 [2024-07-15 19:52:37.022802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.022902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.022918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.022935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.022983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.022990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.022994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.022998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.023008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.023024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.023041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.023086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.023094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.023098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.023113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.023129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.023146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.023189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.023196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.023199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.023214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.023223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.023230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.023247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.027331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.027348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.027353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.027358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.027371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.027377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.027381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8510) 00:14:42.845 [2024-07-15 19:52:37.027390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.845 [2024-07-15 19:52:37.027415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133b380, cid 3, qid 0 00:14:42.845 [2024-07-15 19:52:37.027465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:42.845 [2024-07-15 19:52:37.027472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:42.845 [2024-07-15 19:52:37.027476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:42.845 [2024-07-15 19:52:37.027480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133b380) on tqpair=0x12d8510 00:14:42.845 [2024-07-15 19:52:37.027489] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:42.845 00:14:42.845 19:52:37 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:42.845 [2024-07-15 19:52:37.068457] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 
initialization... 00:14:42.845 [2024-07-15 19:52:37.068515] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74923 ] 00:14:43.109 [2024-07-15 19:52:37.205011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:43.109 [2024-07-15 19:52:37.205130] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:43.109 [2024-07-15 19:52:37.205137] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:43.109 [2024-07-15 19:52:37.205150] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:43.109 [2024-07-15 19:52:37.205161] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:43.109 [2024-07-15 19:52:37.205345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:43.109 [2024-07-15 19:52:37.205415] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x592510 0 00:14:43.109 [2024-07-15 19:52:37.210319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:43.109 [2024-07-15 19:52:37.210342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:43.109 [2024-07-15 19:52:37.210365] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:43.109 [2024-07-15 19:52:37.210369] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:43.109 [2024-07-15 19:52:37.210425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.210431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.210436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.210450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:43.109 [2024-07-15 19:52:37.210480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.224295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.224316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.224338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.224354] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:43.109 [2024-07-15 19:52:37.224362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:43.109 [2024-07-15 19:52:37.224368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:43.109 [2024-07-15 19:52:37.224389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224398] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.224407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.224433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.224486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.224493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.224497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.224506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:43.109 [2024-07-15 19:52:37.224514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:43.109 [2024-07-15 19:52:37.224522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.224536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.224553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.224616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.224622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.224626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.224636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:43.109 [2024-07-15 19:52:37.224645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.224653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.224668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.224684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.224731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.224738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.224741] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.224751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.224762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.224807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.224826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.224881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.224888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.224891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.224896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.224901] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:43.109 [2024-07-15 19:52:37.224906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.224915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.225021] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:43.109 [2024-07-15 19:52:37.225026] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.225036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.225051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.225070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.225146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.225154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.225158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.225168] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.109 [2024-07-15 19:52:37.225179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.109 [2024-07-15 19:52:37.225195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.109 [2024-07-15 19:52:37.225213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.109 [2024-07-15 19:52:37.225261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.109 [2024-07-15 19:52:37.225268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.109 [2024-07-15 19:52:37.225271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.109 [2024-07-15 19:52:37.225281] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.109 [2024-07-15 19:52:37.225286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:43.109 [2024-07-15 19:52:37.225295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:43.109 [2024-07-15 19:52:37.225323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.109 [2024-07-15 19:52:37.225336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.109 [2024-07-15 19:52:37.225341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.110 [2024-07-15 19:52:37.225369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.110 [2024-07-15 19:52:37.225461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.110 [2024-07-15 19:52:37.225468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.110 [2024-07-15 19:52:37.225472] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225477] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=4096, cccid=0 00:14:43.110 [2024-07-15 19:52:37.225482] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f4f00) on tqpair(0x592510): expected_datao=0, payload_size=4096 00:14:43.110 [2024-07-15 19:52:37.225487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225496] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225500] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 
19:52:37.225509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.110 [2024-07-15 19:52:37.225515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.110 [2024-07-15 19:52:37.225519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.110 [2024-07-15 19:52:37.225532] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:43.110 [2024-07-15 19:52:37.225537] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:43.110 [2024-07-15 19:52:37.225542] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:43.110 [2024-07-15 19:52:37.225548] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:43.110 [2024-07-15 19:52:37.225553] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:43.110 [2024-07-15 19:52:37.225558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.110 [2024-07-15 19:52:37.225617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.110 [2024-07-15 19:52:37.225666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.110 [2024-07-15 19:52:37.225673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.110 [2024-07-15 19:52:37.225676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.110 [2024-07-15 19:52:37.225689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.110 [2024-07-15 19:52:37.225711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x592510) 00:14:43.110 
[2024-07-15 19:52:37.225725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.110 [2024-07-15 19:52:37.225737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.110 [2024-07-15 19:52:37.225758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.110 [2024-07-15 19:52:37.225777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.225813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.110 [2024-07-15 19:52:37.225832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f4f00, cid 0, qid 0 00:14:43.110 [2024-07-15 19:52:37.225839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5080, cid 1, qid 0 00:14:43.110 [2024-07-15 19:52:37.225844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5200, cid 2, qid 0 00:14:43.110 [2024-07-15 19:52:37.225849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.110 [2024-07-15 19:52:37.225854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.110 [2024-07-15 19:52:37.225940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.110 [2024-07-15 19:52:37.225946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.110 [2024-07-15 19:52:37.225950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.110 [2024-07-15 19:52:37.225964] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:43.110 [2024-07-15 19:52:37.225970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225979] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.225993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.225998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.226009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.110 [2024-07-15 19:52:37.226027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.110 [2024-07-15 19:52:37.226084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.110 [2024-07-15 19:52:37.226091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.110 [2024-07-15 19:52:37.226095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.110 [2024-07-15 19:52:37.226165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.226177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.226186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.226198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.110 [2024-07-15 19:52:37.226217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.110 [2024-07-15 19:52:37.226292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.110 [2024-07-15 19:52:37.226300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.110 [2024-07-15 19:52:37.226304] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226308] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=4096, cccid=4 00:14:43.110 [2024-07-15 19:52:37.226313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5500) on tqpair(0x592510): expected_datao=0, payload_size=4096 00:14:43.110 [2024-07-15 19:52:37.226318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226326] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226337] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.110 [2024-07-15 19:52:37.226352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:43.110 [2024-07-15 19:52:37.226355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.110 [2024-07-15 19:52:37.226371] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:43.110 [2024-07-15 19:52:37.226384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.226395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:43.110 [2024-07-15 19:52:37.226403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.110 [2024-07-15 19:52:37.226415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.110 [2024-07-15 19:52:37.226436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.110 [2024-07-15 19:52:37.226508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.110 [2024-07-15 19:52:37.226515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.110 [2024-07-15 19:52:37.226519] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226523] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=4096, cccid=4 00:14:43.110 [2024-07-15 19:52:37.226528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5500) on tqpair(0x592510): expected_datao=0, payload_size=4096 00:14:43.110 [2024-07-15 19:52:37.226532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226540] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226544] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.110 [2024-07-15 19:52:37.226552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.226558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.226562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.226582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.226614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.226633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.111 [2024-07-15 19:52:37.226691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.111 [2024-07-15 19:52:37.226698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.111 [2024-07-15 19:52:37.226701] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226705] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=4096, cccid=4 00:14:43.111 [2024-07-15 19:52:37.226710] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5500) on tqpair(0x592510): expected_datao=0, payload_size=4096 00:14:43.111 [2024-07-15 19:52:37.226715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226722] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226726] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.226740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.226744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.226757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226801] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:43.111 [2024-07-15 19:52:37.226805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:43.111 [2024-07-15 19:52:37.226811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:43.111 [2024-07-15 19:52:37.226829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.226841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.226849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.226864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.111 [2024-07-15 19:52:37.226889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.111 [2024-07-15 19:52:37.226897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5680, cid 5, qid 0 00:14:43.111 [2024-07-15 19:52:37.226957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.226963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.226967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.226979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.226985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.226988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.226992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5680) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.227003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5680, cid 5, qid 0 00:14:43.111 [2024-07-15 19:52:37.227078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.227085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.227088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5680) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.227103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5680, cid 5, qid 0 00:14:43.111 [2024-07-15 19:52:37.227181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.227188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.227191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5680) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.227206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5680, cid 5, qid 0 00:14:43.111 [2024-07-15 19:52:37.227297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.111 [2024-07-15 19:52:37.227305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.111 [2024-07-15 19:52:37.227309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5680) on tqpair=0x592510 00:14:43.111 [2024-07-15 19:52:37.227333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x592510) 00:14:43.111 [2024-07-15 19:52:37.227408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.111 [2024-07-15 19:52:37.227429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5680, cid 5, qid 0 00:14:43.111 [2024-07-15 19:52:37.227436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5500, cid 4, qid 0 00:14:43.111 [2024-07-15 19:52:37.227441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5800, cid 6, qid 0 00:14:43.111 [2024-07-15 
19:52:37.227446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5980, cid 7, qid 0 00:14:43.111 [2024-07-15 19:52:37.227587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.111 [2024-07-15 19:52:37.227594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.111 [2024-07-15 19:52:37.227598] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227602] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=8192, cccid=5 00:14:43.111 [2024-07-15 19:52:37.227607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5680) on tqpair(0x592510): expected_datao=0, payload_size=8192 00:14:43.111 [2024-07-15 19:52:37.227611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227628] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227633] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.111 [2024-07-15 19:52:37.227645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.111 [2024-07-15 19:52:37.227649] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227653] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=512, cccid=4 00:14:43.111 [2024-07-15 19:52:37.227658] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5500) on tqpair(0x592510): expected_datao=0, payload_size=512 00:14:43.111 [2024-07-15 19:52:37.227662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227669] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227672] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.111 [2024-07-15 19:52:37.227678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.111 [2024-07-15 19:52:37.227684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.111 [2024-07-15 19:52:37.227688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227692] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=512, cccid=6 00:14:43.112 [2024-07-15 19:52:37.227697] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5800) on tqpair(0x592510): expected_datao=0, payload_size=512 00:14:43.112 [2024-07-15 19:52:37.227701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227707] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227711] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.112 [2024-07-15 19:52:37.227723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.112 [2024-07-15 19:52:37.227726] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227730] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x592510): datao=0, datal=4096, cccid=7 00:14:43.112 [2024-07-15 19:52:37.227739] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5f5980) on tqpair(0x592510): expected_datao=0, payload_size=4096 00:14:43.112 [2024-07-15 19:52:37.227744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227751] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227755] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.112 [2024-07-15 19:52:37.227769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.112 [2024-07-15 19:52:37.227772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5680) on tqpair=0x592510 00:14:43.112 [2024-07-15 19:52:37.227793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.112 [2024-07-15 19:52:37.227800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.112 [2024-07-15 19:52:37.227803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5500) on tqpair=0x592510 00:14:43.112 [2024-07-15 19:52:37.227820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.112 [2024-07-15 19:52:37.227827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.112 [2024-07-15 19:52:37.227831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5800) on tqpair=0x592510 00:14:43.112 [2024-07-15 19:52:37.227843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.112 [2024-07-15 19:52:37.227849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.112 [2024-07-15 19:52:37.227852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.112 [2024-07-15 19:52:37.227856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5980) on tqpair=0x592510 00:14:43.112 ===================================================== 00:14:43.112 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.112 ===================================================== 00:14:43.112 Controller Capabilities/Features 00:14:43.112 ================================ 00:14:43.112 Vendor ID: 8086 00:14:43.112 Subsystem Vendor ID: 8086 00:14:43.112 Serial Number: SPDK00000000000001 00:14:43.112 Model Number: SPDK bdev Controller 00:14:43.112 Firmware Version: 24.09 00:14:43.112 Recommended Arb Burst: 6 00:14:43.112 IEEE OUI Identifier: e4 d2 5c 00:14:43.112 Multi-path I/O 00:14:43.112 May have multiple subsystem ports: Yes 00:14:43.112 May have multiple controllers: Yes 00:14:43.112 Associated with SR-IOV VF: No 00:14:43.112 Max Data Transfer Size: 131072 00:14:43.112 Max Number of Namespaces: 32 00:14:43.112 Max Number of I/O Queues: 127 00:14:43.112 NVMe Specification Version (VS): 1.3 00:14:43.112 NVMe Specification Version (Identify): 1.3 00:14:43.112 Maximum Queue Entries: 128 00:14:43.112 Contiguous Queues Required: Yes 00:14:43.112 Arbitration Mechanisms Supported 00:14:43.112 Weighted Round Robin: Not Supported 00:14:43.112 Vendor Specific: Not Supported 00:14:43.112 Reset Timeout: 15000 ms 00:14:43.112 
Doorbell Stride: 4 bytes 00:14:43.112 NVM Subsystem Reset: Not Supported 00:14:43.112 Command Sets Supported 00:14:43.112 NVM Command Set: Supported 00:14:43.112 Boot Partition: Not Supported 00:14:43.112 Memory Page Size Minimum: 4096 bytes 00:14:43.112 Memory Page Size Maximum: 4096 bytes 00:14:43.112 Persistent Memory Region: Not Supported 00:14:43.112 Optional Asynchronous Events Supported 00:14:43.112 Namespace Attribute Notices: Supported 00:14:43.112 Firmware Activation Notices: Not Supported 00:14:43.112 ANA Change Notices: Not Supported 00:14:43.112 PLE Aggregate Log Change Notices: Not Supported 00:14:43.112 LBA Status Info Alert Notices: Not Supported 00:14:43.112 EGE Aggregate Log Change Notices: Not Supported 00:14:43.112 Normal NVM Subsystem Shutdown event: Not Supported 00:14:43.112 Zone Descriptor Change Notices: Not Supported 00:14:43.112 Discovery Log Change Notices: Not Supported 00:14:43.112 Controller Attributes 00:14:43.112 128-bit Host Identifier: Supported 00:14:43.112 Non-Operational Permissive Mode: Not Supported 00:14:43.112 NVM Sets: Not Supported 00:14:43.112 Read Recovery Levels: Not Supported 00:14:43.112 Endurance Groups: Not Supported 00:14:43.112 Predictable Latency Mode: Not Supported 00:14:43.112 Traffic Based Keep ALive: Not Supported 00:14:43.112 Namespace Granularity: Not Supported 00:14:43.112 SQ Associations: Not Supported 00:14:43.112 UUID List: Not Supported 00:14:43.112 Multi-Domain Subsystem: Not Supported 00:14:43.112 Fixed Capacity Management: Not Supported 00:14:43.112 Variable Capacity Management: Not Supported 00:14:43.112 Delete Endurance Group: Not Supported 00:14:43.112 Delete NVM Set: Not Supported 00:14:43.112 Extended LBA Formats Supported: Not Supported 00:14:43.112 Flexible Data Placement Supported: Not Supported 00:14:43.112 00:14:43.112 Controller Memory Buffer Support 00:14:43.112 ================================ 00:14:43.112 Supported: No 00:14:43.112 00:14:43.112 Persistent Memory Region Support 00:14:43.112 ================================ 00:14:43.112 Supported: No 00:14:43.112 00:14:43.112 Admin Command Set Attributes 00:14:43.112 ============================ 00:14:43.112 Security Send/Receive: Not Supported 00:14:43.112 Format NVM: Not Supported 00:14:43.112 Firmware Activate/Download: Not Supported 00:14:43.112 Namespace Management: Not Supported 00:14:43.112 Device Self-Test: Not Supported 00:14:43.112 Directives: Not Supported 00:14:43.112 NVMe-MI: Not Supported 00:14:43.112 Virtualization Management: Not Supported 00:14:43.112 Doorbell Buffer Config: Not Supported 00:14:43.112 Get LBA Status Capability: Not Supported 00:14:43.112 Command & Feature Lockdown Capability: Not Supported 00:14:43.112 Abort Command Limit: 4 00:14:43.112 Async Event Request Limit: 4 00:14:43.112 Number of Firmware Slots: N/A 00:14:43.112 Firmware Slot 1 Read-Only: N/A 00:14:43.112 Firmware Activation Without Reset: N/A 00:14:43.112 Multiple Update Detection Support: N/A 00:14:43.112 Firmware Update Granularity: No Information Provided 00:14:43.112 Per-Namespace SMART Log: No 00:14:43.112 Asymmetric Namespace Access Log Page: Not Supported 00:14:43.112 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:43.112 Command Effects Log Page: Supported 00:14:43.112 Get Log Page Extended Data: Supported 00:14:43.112 Telemetry Log Pages: Not Supported 00:14:43.112 Persistent Event Log Pages: Not Supported 00:14:43.112 Supported Log Pages Log Page: May Support 00:14:43.112 Commands Supported & Effects Log Page: Not Supported 00:14:43.112 Feature Identifiers & 
Effects Log Page:May Support 00:14:43.112 NVMe-MI Commands & Effects Log Page: May Support 00:14:43.112 Data Area 4 for Telemetry Log: Not Supported 00:14:43.112 Error Log Page Entries Supported: 128 00:14:43.112 Keep Alive: Supported 00:14:43.112 Keep Alive Granularity: 10000 ms 00:14:43.112 00:14:43.112 NVM Command Set Attributes 00:14:43.112 ========================== 00:14:43.112 Submission Queue Entry Size 00:14:43.112 Max: 64 00:14:43.112 Min: 64 00:14:43.112 Completion Queue Entry Size 00:14:43.112 Max: 16 00:14:43.112 Min: 16 00:14:43.112 Number of Namespaces: 32 00:14:43.112 Compare Command: Supported 00:14:43.112 Write Uncorrectable Command: Not Supported 00:14:43.112 Dataset Management Command: Supported 00:14:43.112 Write Zeroes Command: Supported 00:14:43.112 Set Features Save Field: Not Supported 00:14:43.112 Reservations: Supported 00:14:43.112 Timestamp: Not Supported 00:14:43.112 Copy: Supported 00:14:43.112 Volatile Write Cache: Present 00:14:43.112 Atomic Write Unit (Normal): 1 00:14:43.112 Atomic Write Unit (PFail): 1 00:14:43.112 Atomic Compare & Write Unit: 1 00:14:43.112 Fused Compare & Write: Supported 00:14:43.112 Scatter-Gather List 00:14:43.112 SGL Command Set: Supported 00:14:43.112 SGL Keyed: Supported 00:14:43.112 SGL Bit Bucket Descriptor: Not Supported 00:14:43.112 SGL Metadata Pointer: Not Supported 00:14:43.112 Oversized SGL: Not Supported 00:14:43.112 SGL Metadata Address: Not Supported 00:14:43.112 SGL Offset: Supported 00:14:43.112 Transport SGL Data Block: Not Supported 00:14:43.112 Replay Protected Memory Block: Not Supported 00:14:43.112 00:14:43.112 Firmware Slot Information 00:14:43.112 ========================= 00:14:43.112 Active slot: 1 00:14:43.112 Slot 1 Firmware Revision: 24.09 00:14:43.112 00:14:43.112 00:14:43.112 Commands Supported and Effects 00:14:43.112 ============================== 00:14:43.112 Admin Commands 00:14:43.112 -------------- 00:14:43.112 Get Log Page (02h): Supported 00:14:43.112 Identify (06h): Supported 00:14:43.112 Abort (08h): Supported 00:14:43.112 Set Features (09h): Supported 00:14:43.112 Get Features (0Ah): Supported 00:14:43.112 Asynchronous Event Request (0Ch): Supported 00:14:43.112 Keep Alive (18h): Supported 00:14:43.112 I/O Commands 00:14:43.112 ------------ 00:14:43.112 Flush (00h): Supported LBA-Change 00:14:43.112 Write (01h): Supported LBA-Change 00:14:43.113 Read (02h): Supported 00:14:43.113 Compare (05h): Supported 00:14:43.113 Write Zeroes (08h): Supported LBA-Change 00:14:43.113 Dataset Management (09h): Supported LBA-Change 00:14:43.113 Copy (19h): Supported LBA-Change 00:14:43.113 00:14:43.113 Error Log 00:14:43.113 ========= 00:14:43.113 00:14:43.113 Arbitration 00:14:43.113 =========== 00:14:43.113 Arbitration Burst: 1 00:14:43.113 00:14:43.113 Power Management 00:14:43.113 ================ 00:14:43.113 Number of Power States: 1 00:14:43.113 Current Power State: Power State #0 00:14:43.113 Power State #0: 00:14:43.113 Max Power: 0.00 W 00:14:43.113 Non-Operational State: Operational 00:14:43.113 Entry Latency: Not Reported 00:14:43.113 Exit Latency: Not Reported 00:14:43.113 Relative Read Throughput: 0 00:14:43.113 Relative Read Latency: 0 00:14:43.113 Relative Write Throughput: 0 00:14:43.113 Relative Write Latency: 0 00:14:43.113 Idle Power: Not Reported 00:14:43.113 Active Power: Not Reported 00:14:43.113 Non-Operational Permissive Mode: Not Supported 00:14:43.113 00:14:43.113 Health Information 00:14:43.113 ================== 00:14:43.113 Critical Warnings: 00:14:43.113 Available Spare Space: 
OK 00:14:43.113 Temperature: OK 00:14:43.113 Device Reliability: OK 00:14:43.113 Read Only: No 00:14:43.113 Volatile Memory Backup: OK 00:14:43.113 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:43.113 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:43.113 Available Spare: 0% 00:14:43.113 Available Spare Threshold: 0% 00:14:43.113 Life Percentage Used:[2024-07-15 19:52:37.227970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.227978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.227986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.228009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5980, cid 7, qid 0 00:14:43.113 [2024-07-15 19:52:37.228053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.228060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.228064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.228069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5980) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.228119] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:43.113 [2024-07-15 19:52:37.228131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f4f00) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.228137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.113 [2024-07-15 19:52:37.228143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5080) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.113 [2024-07-15 19:52:37.228153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5200) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.228158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.113 [2024-07-15 19:52:37.228164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.228168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.113 [2024-07-15 19:52:37.228178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.228182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.228186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.228194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.228215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.228258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231335] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.231387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.231415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.231453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.231525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.231545] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:43.113 [2024-07-15 19:52:37.231550] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:43.113 [2024-07-15 19:52:37.231561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.231578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.231595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.231649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.231675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.231691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.231707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.231755] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.231779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.231795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.231811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.231858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.113 [2024-07-15 19:52:37.231883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.113 [2024-07-15 19:52:37.231899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.113 [2024-07-15 19:52:37.231914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.113 [2024-07-15 19:52:37.231959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.113 [2024-07-15 19:52:37.231965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.113 [2024-07-15 19:52:37.231969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.113 [2024-07-15 19:52:37.231973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.231984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.231988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.231992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.231999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232072] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 
[2024-07-15 19:52:37.232418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 
19:52:37.232745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.232935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.232941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.232945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.232960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.232968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.232975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.232991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.233039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.233046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.233050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.233054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.114 [2024-07-15 19:52:37.233064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.233069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.233073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.114 [2024-07-15 19:52:37.233080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.114 [2024-07-15 19:52:37.233096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.114 [2024-07-15 19:52:37.233146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.114 [2024-07-15 19:52:37.233153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.114 [2024-07-15 19:52:37.233156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.114 [2024-07-15 19:52:37.233160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 
19:52:37.233472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 
19:52:37.233797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.233891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.233903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.233907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.233922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.233931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.233939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.233955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.234006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.234013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.234017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.234031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.234047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.234063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.234112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.234118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.234122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 
00:14:43.115 [2024-07-15 19:52:37.234137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.234153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.234168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.234216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.234222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.234226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.234240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.234256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.234284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.234331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.234338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.115 [2024-07-15 19:52:37.234341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.115 [2024-07-15 19:52:37.234356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.115 [2024-07-15 19:52:37.234365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.115 [2024-07-15 19:52:37.234372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.115 [2024-07-15 19:52:37.234389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.115 [2024-07-15 19:52:37.234439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.115 [2024-07-15 19:52:37.234446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:43.116 [2024-07-15 19:52:37.234473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.234481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.234496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.234544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.234550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.234584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.234600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.234647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.234654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.234688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.234704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.234752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.234758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.234793] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.234809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.234856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.234863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.234897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.234914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.234970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.234976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.234980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.234995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.234999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.235011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.235026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.235071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.235077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.235081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.235095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.235111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.235127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.235174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.235181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.235184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.235199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.235208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.235215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.235231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.239306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.239323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.239327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.239332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.239346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.239352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.239355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x592510) 00:14:43.116 [2024-07-15 19:52:37.239364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.116 [2024-07-15 19:52:37.239389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5f5380, cid 3, qid 0 00:14:43.116 [2024-07-15 19:52:37.239437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.116 [2024-07-15 19:52:37.239444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.116 [2024-07-15 19:52:37.239447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.116 [2024-07-15 19:52:37.239452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5f5380) on tqpair=0x592510 00:14:43.116 [2024-07-15 19:52:37.239460] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:43.116 0% 00:14:43.116 Data Units Read: 0 00:14:43.116 Data Units Written: 0 00:14:43.116 Host Read Commands: 0 00:14:43.116 Host Write Commands: 0 00:14:43.116 Controller Busy Time: 0 minutes 00:14:43.116 Power Cycles: 0 00:14:43.116 Power On Hours: 0 hours 00:14:43.116 Unsafe Shutdowns: 0 00:14:43.116 Unrecoverable Media Errors: 0 00:14:43.116 Lifetime Error Log Entries: 0 00:14:43.116 Warning Temperature Time: 0 minutes 00:14:43.116 Critical Temperature Time: 0 minutes 00:14:43.116 00:14:43.116 Number of Queues 00:14:43.116 ================ 00:14:43.116 Number of I/O Submission Queues: 127 00:14:43.116 Number of I/O Completion Queues: 127 00:14:43.116 00:14:43.116 
Active Namespaces 00:14:43.116 ================= 00:14:43.116 Namespace ID:1 00:14:43.116 Error Recovery Timeout: Unlimited 00:14:43.116 Command Set Identifier: NVM (00h) 00:14:43.116 Deallocate: Supported 00:14:43.116 Deallocated/Unwritten Error: Not Supported 00:14:43.116 Deallocated Read Value: Unknown 00:14:43.116 Deallocate in Write Zeroes: Not Supported 00:14:43.116 Deallocated Guard Field: 0xFFFF 00:14:43.116 Flush: Supported 00:14:43.116 Reservation: Supported 00:14:43.116 Namespace Sharing Capabilities: Multiple Controllers 00:14:43.116 Size (in LBAs): 131072 (0GiB) 00:14:43.116 Capacity (in LBAs): 131072 (0GiB) 00:14:43.116 Utilization (in LBAs): 131072 (0GiB) 00:14:43.116 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:43.116 EUI64: ABCDEF0123456789 00:14:43.116 UUID: 288ac6ca-5275-48d1-a982-22d6242392ab 00:14:43.116 Thin Provisioning: Not Supported 00:14:43.116 Per-NS Atomic Units: Yes 00:14:43.116 Atomic Boundary Size (Normal): 0 00:14:43.116 Atomic Boundary Size (PFail): 0 00:14:43.116 Atomic Boundary Offset: 0 00:14:43.116 Maximum Single Source Range Length: 65535 00:14:43.116 Maximum Copy Length: 65535 00:14:43.116 Maximum Source Range Count: 1 00:14:43.116 NGUID/EUI64 Never Reused: No 00:14:43.116 Namespace Write Protected: No 00:14:43.116 Number of LBA Formats: 1 00:14:43.116 Current LBA Format: LBA Format #00 00:14:43.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:43.116 00:14:43.116 19:52:37 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:43.116 19:52:37 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.116 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.116 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.116 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.117 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.117 rmmod nvme_tcp 00:14:43.117 rmmod nvme_fabrics 00:14:43.377 rmmod nvme_keyring 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74886 ']' 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74886 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74886 ']' 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74886 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.377 
19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74886 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.377 killing process with pid 74886 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74886' 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74886 00:14:43.377 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74886 00:14:43.636 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:43.637 00:14:43.637 real 0m2.470s 00:14:43.637 user 0m6.741s 00:14:43.637 sys 0m0.666s 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.637 19:52:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.637 ************************************ 00:14:43.637 END TEST nvmf_identify 00:14:43.637 ************************************ 00:14:43.637 19:52:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.637 19:52:37 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:43.637 19:52:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.637 19:52:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.637 19:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.637 ************************************ 00:14:43.637 START TEST nvmf_perf 00:14:43.637 ************************************ 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:43.637 * Looking for test storage... 
00:14:43.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:43.637 Cannot find device "nvmf_tgt_br" 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:43.637 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.896 Cannot find device "nvmf_tgt_br2" 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:43.896 Cannot find device "nvmf_tgt_br" 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:43.896 Cannot find device "nvmf_tgt_br2" 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.896 
19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.896 19:52:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.896 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:44.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:14:44.154 00:14:44.154 --- 10.0.0.2 ping statistics --- 00:14:44.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.154 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:44.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:44.154 00:14:44.154 --- 10.0.0.3 ping statistics --- 00:14:44.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.154 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:44.154 00:14:44.154 --- 10.0.0.1 ping statistics --- 00:14:44.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.154 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75089 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75089 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75089 ']' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.154 19:52:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:44.154 [2024-07-15 19:52:38.258206] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:44.154 [2024-07-15 19:52:38.258338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.154 [2024-07-15 19:52:38.393039] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.412 [2024-07-15 19:52:38.507447] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.412 [2024-07-15 19:52:38.507519] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
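For reference, the target bring-up and provisioning that the perf test performs in the trace below reduces to the following sequence. This is a condensed reader's sketch assembled from the rpc.py calls in this log, not additional test output; paths are shortened relative to the SPDK repo root, the waitforlisten handshake on /var/tmp/spdk.sock is elided, and backgrounding with & stands in for it here.

# start the NVMe-oF target inside the test namespace (nvmfpid 75089 above), then provision it over the RPC socket
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config        # attach the local PCIe NVMe at 0000:00:10.0 -> Nvme0n1
scripts/rpc.py bdev_malloc_create 64 512                          # 64 MB RAM bdev with 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420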
00:14:44.412 [2024-07-15 19:52:38.507545] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.412 [2024-07-15 19:52:38.507553] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.412 [2024-07-15 19:52:38.507560] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.412 [2024-07-15 19:52:38.507704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.412 [2024-07-15 19:52:38.507866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.412 [2024-07-15 19:52:38.508485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.412 [2024-07-15 19:52:38.508492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.412 [2024-07-15 19:52:38.565369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.977 19:52:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.977 19:52:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:44.977 19:52:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.977 19:52:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.977 19:52:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:45.235 19:52:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.235 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:45.235 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:45.493 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:45.493 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:45.751 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:45.751 19:52:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:46.009 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:46.009 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:46.009 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:46.009 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:46.009 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.267 [2024-07-15 19:52:40.429000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.267 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:46.526 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:46.526 19:52:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.784 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:46.784 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:14:47.042 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.299 [2024-07-15 19:52:41.442396] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.299 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:47.557 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:47.557 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:47.557 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:47.557 19:52:41 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:48.934 Initializing NVMe Controllers 00:14:48.934 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:48.934 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:48.934 Initialization complete. Launching workers. 00:14:48.934 ======================================================== 00:14:48.934 Latency(us) 00:14:48.934 Device Information : IOPS MiB/s Average min max 00:14:48.934 PCIE (0000:00:10.0) NSID 1 from core 0: 22624.00 88.38 1414.48 338.23 8099.49 00:14:48.934 ======================================================== 00:14:48.934 Total : 22624.00 88.38 1414.48 338.23 8099.49 00:14:48.934 00:14:48.934 19:52:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:49.868 Initializing NVMe Controllers 00:14:49.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:49.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:49.868 Initialization complete. Launching workers. 00:14:49.868 ======================================================== 00:14:49.868 Latency(us) 00:14:49.868 Device Information : IOPS MiB/s Average min max 00:14:49.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3305.00 12.91 302.24 107.91 7129.65 00:14:49.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8046.97 5976.30 11965.80 00:14:49.868 ======================================================== 00:14:49.868 Total : 3430.00 13.40 584.49 107.91 11965.80 00:14:49.868 00:14:50.163 19:52:44 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:51.539 Initializing NVMe Controllers 00:14:51.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:51.539 Initialization complete. Launching workers. 
00:14:51.539 ======================================================== 00:14:51.539 Latency(us) 00:14:51.539 Device Information : IOPS MiB/s Average min max 00:14:51.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8693.15 33.96 3680.62 543.21 9439.51 00:14:51.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3960.70 15.47 8127.75 5971.18 16048.81 00:14:51.539 ======================================================== 00:14:51.539 Total : 12653.85 49.43 5072.59 543.21 16048.81 00:14:51.539 00:14:51.539 19:52:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:51.539 19:52:45 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:54.102 Initializing NVMe Controllers 00:14:54.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.102 Controller IO queue size 128, less than required. 00:14:54.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.102 Controller IO queue size 128, less than required. 00:14:54.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:54.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:54.102 Initialization complete. Launching workers. 00:14:54.102 ======================================================== 00:14:54.102 Latency(us) 00:14:54.102 Device Information : IOPS MiB/s Average min max 00:14:54.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1937.43 484.36 66947.07 32851.53 107530.99 00:14:54.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 670.98 167.74 194140.46 69039.25 319182.84 00:14:54.102 ======================================================== 00:14:54.102 Total : 2608.41 652.10 99665.78 32851.53 319182.84 00:14:54.102 00:14:54.102 19:52:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:54.102 Initializing NVMe Controllers 00:14:54.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.102 Controller IO queue size 128, less than required. 00:14:54.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.102 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:54.102 Controller IO queue size 128, less than required. 00:14:54.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.102 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:14:54.102 WARNING: Some requested NVMe devices were skipped 00:14:54.102 No valid NVMe controllers or AIO or URING devices found 00:14:54.102 19:52:48 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:56.635 Initializing NVMe Controllers 00:14:56.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.635 Controller IO queue size 128, less than required. 00:14:56.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.635 Controller IO queue size 128, less than required. 00:14:56.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:56.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:56.635 Initialization complete. Launching workers. 00:14:56.635 00:14:56.635 ==================== 00:14:56.635 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:56.635 TCP transport: 00:14:56.635 polls: 10464 00:14:56.635 idle_polls: 7111 00:14:56.635 sock_completions: 3353 00:14:56.635 nvme_completions: 6523 00:14:56.635 submitted_requests: 9786 00:14:56.635 queued_requests: 1 00:14:56.635 00:14:56.635 ==================== 00:14:56.635 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:56.635 TCP transport: 00:14:56.635 polls: 10516 00:14:56.635 idle_polls: 6323 00:14:56.635 sock_completions: 4193 00:14:56.635 nvme_completions: 6653 00:14:56.635 submitted_requests: 9898 00:14:56.635 queued_requests: 1 00:14:56.635 ======================================================== 00:14:56.635 Latency(us) 00:14:56.635 Device Information : IOPS MiB/s Average min max 00:14:56.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1630.32 407.58 80565.21 42284.81 129940.10 00:14:56.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1662.82 415.70 77144.58 31633.50 119494.84 00:14:56.635 ======================================================== 00:14:56.635 Total : 3293.14 823.28 78838.02 31633.50 129940.10 00:14:56.635 00:14:56.635 19:52:50 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:56.635 19:52:50 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.894 rmmod nvme_tcp 00:14:56.894 rmmod nvme_fabrics 00:14:56.894 rmmod nvme_keyring 00:14:56.894 19:52:51 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75089 ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75089 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75089 ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75089 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75089 00:14:56.894 killing process with pid 75089 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75089' 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75089 00:14:56.894 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75089 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:57.492 00:14:57.492 real 0m13.931s 00:14:57.492 user 0m50.984s 00:14:57.492 sys 0m4.038s 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.492 19:52:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:57.492 ************************************ 00:14:57.492 END TEST nvmf_perf 00:14:57.492 ************************************ 00:14:57.492 19:52:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:57.492 19:52:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:57.492 19:52:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:57.492 19:52:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.492 19:52:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.492 ************************************ 00:14:57.492 START TEST nvmf_fio_host 00:14:57.492 ************************************ 00:14:57.492 19:52:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:57.751 * Looking for test storage... 
00:14:57.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
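At this point nvmftestinit for the fio_host test repeats the same nvmf_veth_init network setup that the perf test ran above. Condensed into one place, the topology it builds is the following; this is a reader's sketch assembled from the commands in this trace, not additional test output, and the individual ip link set ... up calls are elided.

# one network namespace for the target, three veth pairs, one bridge on the host side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2         # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                    # bridge the host-side veth peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # allow the NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                           # reachability checks before the test starts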
00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.751 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:57.752 Cannot find device "nvmf_tgt_br" 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.752 Cannot find device "nvmf_tgt_br2" 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:57.752 Cannot find device "nvmf_tgt_br" 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:57.752 Cannot find device "nvmf_tgt_br2" 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.752 19:52:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:58.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:58.011 00:14:58.011 --- 10.0.0.2 ping statistics --- 00:14:58.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.011 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:58.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:58.011 00:14:58.011 --- 10.0.0.3 ping statistics --- 00:14:58.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.011 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:14:58.011 00:14:58.011 --- 10.0.0.1 ping statistics --- 00:14:58.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.011 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75492 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75492 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75492 ']' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.011 19:52:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.011 [2024-07-15 19:52:52.201864] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:58.011 [2024-07-15 19:52:52.201959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.269 [2024-07-15 19:52:52.335362] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.269 [2024-07-15 19:52:52.430991] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:58.269 [2024-07-15 19:52:52.431307] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.269 [2024-07-15 19:52:52.431453] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.269 [2024-07-15 19:52:52.431589] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.269 [2024-07-15 19:52:52.431620] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.269 [2024-07-15 19:52:52.431825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.269 [2024-07-15 19:52:52.431931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.269 [2024-07-15 19:52:52.432013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.269 [2024-07-15 19:52:52.432012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.269 [2024-07-15 19:52:52.489787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:59.204 [2024-07-15 19:52:53.384868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.204 19:52:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.463 19:52:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.463 Malloc1 00:14:59.463 19:52:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:59.721 19:52:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.980 19:52:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.239 [2024-07-15 19:52:54.404235] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.239 19:52:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:00.497 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:00.756 19:52:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:00.756 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:00.756 fio-3.35 00:15:00.756 Starting 1 thread 00:15:03.289 00:15:03.289 test: (groupid=0, jobs=1): err= 0: pid=75570: Mon Jul 15 19:52:57 2024 00:15:03.289 read: IOPS=8933, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2007msec) 00:15:03.289 slat (nsec): min=1908, max=355376, avg=2585.99, stdev=3632.57 00:15:03.289 clat (usec): min=2664, max=13492, avg=7439.15, stdev=549.46 00:15:03.289 lat (usec): min=2714, max=13494, avg=7441.73, stdev=549.21 00:15:03.289 clat percentiles (usec): 00:15:03.289 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:15:03.289 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:15:03.289 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8225], 00:15:03.289 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[12387], 99.95th=[13173], 00:15:03.289 | 99.99th=[13435] 00:15:03.289 bw ( KiB/s): min=34872, max=36912, per=99.99%, avg=35730.00, stdev=881.99, samples=4 00:15:03.289 iops : min= 8718, max= 9228, avg=8932.50, stdev=220.50, samples=4 00:15:03.289 write: IOPS=8952, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2007msec); 0 zone resets 00:15:03.289 
slat (usec): min=2, max=237, avg= 2.65, stdev= 2.41 00:15:03.289 clat (usec): min=2514, max=12622, avg=6801.12, stdev=490.70 00:15:03.289 lat (usec): min=2528, max=12625, avg=6803.77, stdev=490.56 00:15:03.289 clat percentiles (usec): 00:15:03.289 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:15:03.289 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:15:03.289 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7504], 00:15:03.289 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[10552], 99.95th=[11863], 00:15:03.289 | 99.99th=[12256] 00:15:03.289 bw ( KiB/s): min=35608, max=35968, per=100.00%, avg=35810.00, stdev=174.95, samples=4 00:15:03.289 iops : min= 8902, max= 8992, avg=8952.50, stdev=43.74, samples=4 00:15:03.289 lat (msec) : 4=0.08%, 10=99.76%, 20=0.16% 00:15:03.289 cpu : usr=67.65%, sys=23.83%, ctx=24, majf=0, minf=7 00:15:03.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:03.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.289 issued rwts: total=17930,17967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.289 00:15:03.289 Run status group 0 (all jobs): 00:15:03.289 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2007-2007msec 00:15:03.289 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2007-2007msec 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:03.289 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.290 19:52:57 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:03.290 19:52:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:03.290 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:03.290 fio-3.35 00:15:03.290 Starting 1 thread 00:15:05.821 00:15:05.821 test: (groupid=0, jobs=1): err= 0: pid=75613: Mon Jul 15 19:52:59 2024 00:15:05.821 read: IOPS=8286, BW=129MiB/s (136MB/s)(260MiB/2007msec) 00:15:05.821 slat (usec): min=2, max=152, avg= 3.83, stdev= 2.35 00:15:05.821 clat (usec): min=2105, max=16892, avg=8543.28, stdev=2536.01 00:15:05.821 lat (usec): min=2108, max=16896, avg=8547.12, stdev=2536.07 00:15:05.821 clat percentiles (usec): 00:15:05.821 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6259], 00:15:05.821 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 8979], 00:15:05.821 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11731], 95.00th=[12911], 00:15:05.821 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16581], 99.95th=[16712], 00:15:05.821 | 99.99th=[16909] 00:15:05.821 bw ( KiB/s): min=63104, max=73856, per=51.31%, avg=68028.25, stdev=5240.97, samples=4 00:15:05.821 iops : min= 3944, max= 4616, avg=4251.75, stdev=327.55, samples=4 00:15:05.821 write: IOPS=4903, BW=76.6MiB/s (80.3MB/s)(139MiB/1820msec); 0 zone resets 00:15:05.821 slat (usec): min=31, max=320, avg=38.59, stdev= 8.58 00:15:05.821 clat (usec): min=4419, max=21866, avg=12094.64, stdev=2194.59 00:15:05.821 lat (usec): min=4454, max=21899, avg=12133.23, stdev=2195.14 00:15:05.821 clat percentiles (usec): 00:15:05.821 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:15:05.821 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:15:05.821 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[16319], 00:15:05.821 | 99.00th=[17695], 99.50th=[19006], 99.90th=[20841], 99.95th=[21365], 00:15:05.821 | 99.99th=[21890] 00:15:05.821 bw ( KiB/s): min=64864, max=77376, per=90.13%, avg=70707.00, stdev=5889.22, samples=4 00:15:05.821 iops : min= 4054, max= 4836, avg=4419.00, stdev=367.94, samples=4 00:15:05.821 lat (msec) : 4=0.36%, 10=51.84%, 20=47.72%, 50=0.08% 00:15:05.821 cpu : usr=80.32%, sys=14.70%, ctx=16, majf=0, minf=15 00:15:05.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:05.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.821 issued rwts: total=16631,8924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.821 00:15:05.821 Run status group 0 (all jobs): 00:15:05.821 READ: bw=129MiB/s (136MB/s), 
129MiB/s-129MiB/s (136MB/s-136MB/s), io=260MiB (272MB), run=2007-2007msec 00:15:05.821 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=139MiB (146MB), run=1820-1820msec 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.821 19:52:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.821 rmmod nvme_tcp 00:15:05.821 rmmod nvme_fabrics 00:15:05.821 rmmod nvme_keyring 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75492 ']' 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75492 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75492 ']' 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75492 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75492 00:15:05.821 killing process with pid 75492 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75492' 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75492 00:15:05.821 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75492 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
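Editor's note: the fio_plugin helper traced earlier in this log probes the SPDK fio plugin binary for a linked sanitizer runtime and, if one is found, preloads it ahead of the plugin so fio can load an ASan-instrumented build. A condensed sketch of that pattern follows; it is not the exact autotest_common.sh code, paths are taken from the trace, and error handling is omitted.

```bash
#!/usr/bin/env bash
# Sketch: run fio with the SPDK NVMe plugin, preloading a sanitizer runtime
# if the plugin was linked against one (otherwise LD_PRELOAD holds only the
# plugin path, as in this run where asan_lib stayed empty).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_cfg=$1          # e.g. mock_sgl_config.fio
shift

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Resolve the sanitizer runtime path from the plugin's dynamic deps.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# Sanitizer runtime (possibly empty) first, then the plugin itself.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_cfg" "$@"
```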
00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:06.387 ************************************ 00:15:06.387 END TEST nvmf_fio_host 00:15:06.387 ************************************ 00:15:06.387 00:15:06.387 real 0m8.652s 00:15:06.387 user 0m35.481s 00:15:06.387 sys 0m2.360s 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.387 19:53:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:06.387 19:53:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:06.387 19:53:00 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:06.387 19:53:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.387 19:53:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.387 19:53:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.387 ************************************ 00:15:06.387 START TEST nvmf_failover 00:15:06.387 ************************************ 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:06.387 * Looking for test storage... 00:15:06.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.387 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:06.388 Cannot find device "nvmf_tgt_br" 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:06.388 Cannot find device "nvmf_tgt_br2" 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:06.388 Cannot find device "nvmf_tgt_br" 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:06.388 Cannot find device "nvmf_tgt_br2" 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:06.388 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:06.646 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:06.647 19:53:00 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:06.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:06.647 00:15:06.647 --- 10.0.0.2 ping statistics --- 00:15:06.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.647 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:06.647 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.647 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:06.647 00:15:06.647 --- 10.0.0.3 ping statistics --- 00:15:06.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.647 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:06.647 00:15:06.647 --- 10.0.0.1 ping statistics --- 00:15:06.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.647 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75829 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75829 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
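Editor's note: the nvmf_veth_init sequence above boils down to the small topology sketched below, using the interface names and addresses from the trace. The second target interface (nvmf_tgt_if2 at 10.0.0.3) is created the same way, and removal of stale devices from earlier runs is omitted here.

```bash
#!/usr/bin/env bash
# Condensed sketch of the test network: one namespace, veth pairs bridged on
# the host side, and an iptables rule admitting NVMe/TCP traffic on 4420.
set -e

ip netns add nvmf_tgt_ns_spdk

# Initiator side stays in the default namespace; target side moves into the
# test namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, mirroring the pings in the log.
ping -c 1 10.0.0.2
```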
00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75829 ']' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.647 19:53:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:06.906 [2024-07-15 19:53:00.932727] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:06.906 [2024-07-15 19:53:00.932854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.906 [2024-07-15 19:53:01.076565] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:07.165 [2024-07-15 19:53:01.197670] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.165 [2024-07-15 19:53:01.197923] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.165 [2024-07-15 19:53:01.198104] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.165 [2024-07-15 19:53:01.198241] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.165 [2024-07-15 19:53:01.198330] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
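Editor's note: nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the application's RPC socket is available. A simplified sketch of that start-and-wait pattern, assuming a plain poll on the UNIX socket path (the real helper retries with a limit and also verifies the process is still alive):

```bash
# Start the target in the test namespace, exactly as in the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Simplified readiness check: wait for the RPC UNIX socket to appear.
while [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
```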
00:15:07.165 [2024-07-15 19:53:01.199077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.165 [2024-07-15 19:53:01.199192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.165 [2024-07-15 19:53:01.199202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.165 [2024-07-15 19:53:01.257852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.733 19:53:01 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:08.017 [2024-07-15 19:53:02.221269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.017 19:53:02 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:08.585 Malloc0 00:15:08.586 19:53:02 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.586 19:53:02 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.845 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.105 [2024-07-15 19:53:03.249708] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.105 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:09.366 [2024-07-15 19:53:03.477956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:09.366 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:09.628 [2024-07-15 19:53:03.746168] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:09.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
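Editor's note: pulled out of the trace, the target-side configuration built by failover.sh is just the following RPC sequence (commands verbatim from the log, only grouped together):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the same target IP; the failover test later removes and
# re-adds them one at a time while I/O is running.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done
```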
00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75886 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75886 /var/tmp/bdevperf.sock 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75886 ']' 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.628 19:53:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:10.565 19:53:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.565 19:53:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:10.565 19:53:04 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:10.823 NVMe0n1 00:15:10.823 19:53:05 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.390 00:15:11.390 19:53:05 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75910 00:15:11.390 19:53:05 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.390 19:53:05 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:12.327 19:53:06 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.585 19:53:06 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:15.870 19:53:09 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.870 00:15:15.870 19:53:10 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:16.438 19:53:10 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:19.722 19:53:13 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.722 [2024-07-15 19:53:13.674260] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.722 
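Editor's note: on the initiator side, bdevperf attaches two paths to the subsystem up front, and the script then shuffles listeners underneath the running 15-second verify workload. Condensed from the RPC calls in the trace (the interleaved sleeps are kept short here; the final removal of port 4422 happens just before the test is waited on):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_rpc="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# bdevperf attaches two paths to the same controller name.
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n $nqn
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n $nqn

# Force path switches: 4420 goes away, 4422 appears, 4421 goes away,
# 4420 comes back, and finally 4422 is removed again.
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 3
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
    -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener    $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
```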
19:53:13 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:20.658 19:53:14 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:20.997 19:53:14 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75910 00:15:26.277 0 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75886 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75886 ']' 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75886 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75886 00:15:26.537 killing process with pid 75886 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75886' 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75886 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75886 00:15:26.537 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:26.802 [2024-07-15 19:53:03.814034] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:26.802 [2024-07-15 19:53:03.814199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75886 ] 00:15:26.802 [2024-07-15 19:53:03.952030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.802 [2024-07-15 19:53:04.062721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.802 [2024-07-15 19:53:04.118944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:26.802 Running I/O for 15 seconds... 
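Editor's note: the wall of nvme_qpair messages that follows is bdevperf's per-command view of those listener removals; commands still queued on a torn-down queue pair complete with "ABORTED - SQ DELETION" while I/O fails over to a remaining listener. Rather than reading the dump line by line, one quick way to summarize it (using the try.txt path from failover.sh):

```bash
# Count how many completions were aborted by SQ deletion during the failovers.
grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
```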
00:15:26.802 [2024-07-15 19:53:06.664287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.802 [2024-07-15 19:53:06.664591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664666] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.802 [2024-07-15 19:53:06.664829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.802 [2024-07-15 19:53:06.664844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.664860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.664873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.664889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.664902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.664917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.664931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.664946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.664962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.664978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.664991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:32 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76480 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.803 [2024-07-15 19:53:06.665849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 
[2024-07-15 19:53:06.665959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.665977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.665991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.803 [2024-07-15 19:53:06.666152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.803 [2024-07-15 19:53:06.666167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.804 [2024-07-15 19:53:06.666846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.666978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.666992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 
19:53:06.667196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.804 [2024-07-15 19:53:06.667470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.804 [2024-07-15 19:53:06.667486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.805 [2024-07-15 19:53:06.667500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.805 [2024-07-15 19:53:06.667529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.805 [2024-07-15 19:53:06.667558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.805 [2024-07-15 19:53:06.667586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.805 [2024-07-15 19:53:06.667802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x124b2b0 is same with the state(5) to be set 00:15:26.805 [2024-07-15 19:53:06.667841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.667853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.667864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76344 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.667877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.667909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.667919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76800 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.667937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.667961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.667971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76808 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.667984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.667998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76816 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76824 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76832 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76840 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76848 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76856 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76864 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76872 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76880 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668448] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76888 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76896 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76904 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76912 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.805 [2024-07-15 19:53:06.668650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.805 [2024-07-15 19:53:06.668660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76920 len:8 PRP1 0x0 PRP2 0x0 00:15:26.805 [2024-07-15 19:53:06.668673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668730] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x124b2b0 was disconnected and freed. reset controller. 
00:15:26.805 [2024-07-15 19:53:06.668756] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:26.805 [2024-07-15 19:53:06.668837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.805 [2024-07-15 19:53:06.668859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.805 [2024-07-15 19:53:06.668874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.806 [2024-07-15 19:53:06.668888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:06.668902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.806 [2024-07-15 19:53:06.668915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:06.668929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.806 [2024-07-15 19:53:06.668941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:06.668955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:26.806 [2024-07-15 19:53:06.672781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:26.806 [2024-07-15 19:53:06.672835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ea710 (9): Bad file descriptor 00:15:26.806 [2024-07-15 19:53:06.707259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:26.806 [2024-07-15 19:53:10.401933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.401993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.806 [2024-07-15 19:53:10.402406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.806 [2024-07-15 19:53:10.402939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.806 [2024-07-15 19:53:10.402960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.402974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.402988] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:26.807 [2024-07-15 19:53:10.403626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.807 [2024-07-15 19:53:10.403707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.403983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.403997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404216] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.807 [2024-07-15 19:53:10.404244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.807 [2024-07-15 19:53:10.404259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.404525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 
[2024-07-15 19:53:10.404955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.404969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.404992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.405010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.405039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.405069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.808 [2024-07-15 19:53:10.405098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.808 [2024-07-15 19:53:10.405608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x124cd80 is same with the state(5) to be set 00:15:26.808 [2024-07-15 19:53:10.405638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.808 [2024-07-15 19:53:10.405648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.808 [2024-07-15 19:53:10.405659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:15:26.808 [2024-07-15 19:53:10.405686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.808 [2024-07-15 19:53:10.405709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.808 [2024-07-15 19:53:10.405719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:15:26.808 [2024-07-15 19:53:10.405730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.808 [2024-07-15 19:53:10.405743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.808 [2024-07-15 19:53:10.405752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.405773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.405785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.405794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.405838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.405851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.405859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.405880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.405892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.405901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.405921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.405933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.405942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.405962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.405974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.405983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.405992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406218] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.809 [2024-07-15 19:53:10.406413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.809 [2024-07-15 19:53:10.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:15:26.809 [2024-07-15 19:53:10.406435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:10.406500] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x124cd80 was disconnected and freed. reset controller. 
00:15:26.809 [2024-07-15 19:53:10.406541] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:15:26.809 [2024-07-15 19:53:10.406591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:26.809 [2024-07-15 19:53:10.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:26.809 [2024-07-15 19:53:10.406625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:26.809 [2024-07-15 19:53:10.406637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:26.809 [2024-07-15 19:53:10.406650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:26.809 [2024-07-15 19:53:10.406672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:26.809 [2024-07-15 19:53:10.406686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:26.809 [2024-07-15 19:53:10.406698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:26.809 [2024-07-15 19:53:10.406710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:26.809 [2024-07-15 19:53:10.406743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ea710 (9): Bad file descriptor 
00:15:26.809 [2024-07-15 19:53:10.410610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:26.809 [2024-07-15 19:53:10.448911] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
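The records above capture the failover path exercised by this run: once the TCP qpair to 10.0.0.2:4421 is torn down, the remaining queued I/O and admin commands are completed manually as ABORTED - SQ DELETION, bdev_nvme starts a failover to 10.0.0.2:4422, and the controller reset finishes successfully. A minimal sketch for sanity-checking such a run offline, assuming the console output is saved one record per line to a hypothetical file named console.log, is to tally the opcodes of the I/O commands that were printed and then aborted:

  # count how many I/O commands of each opcode (READ/WRITE) appear in the abort storm
  grep 'nvme_io_qpair_print_command' console.log \
    | awk '{ for (i = 1; i <= NF; i++) if ($i == "*NOTICE*:") { print $(i + 1); break } }' \
    | sort | uniq -c

Each nvme_io_qpair_print_command record counted this way is paired in the log with an ABORTED - SQ DELETION completion like the ones above, so the per-opcode totals give a quick view of how much in-flight I/O was drained during the qpair teardown and reset.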
00:15:26.809 [2024-07-15 19:53:14.943018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.809 [2024-07-15 19:53:14.943459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.809 [2024-07-15 19:53:14.943475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.943489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.943978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.943990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:50 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.810 [2024-07-15 19:53:14.944431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.810 [2024-07-15 19:53:14.944682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.810 [2024-07-15 19:53:14.944698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 
[2024-07-15 19:53:14.944711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.944958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.944974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.944988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.811 [2024-07-15 19:53:14.945671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.945975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.945990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.946004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.811 [2024-07-15 19:53:14.946019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.811 [2024-07-15 19:53:14.946033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 
19:53:14.946078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:26.812 [2024-07-15 19:53:14.946212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:26.812 [2024-07-15 19:53:14.946683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1268d90 is same with the state(5) to be set 00:15:26.812 [2024-07-15 19:53:14.946719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.946742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50576 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.946755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.946790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51096 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.946803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.946866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51104 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.946879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.946911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51112 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.946923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51120 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.946968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.946981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.946990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51128 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51136 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51144 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51152 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50584 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50592 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 19:53:14.947276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.812 [2024-07-15 19:53:14.947286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.812 [2024-07-15 19:53:14.947296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50600 len:8 PRP1 0x0 PRP2 0x0 00:15:26.812 [2024-07-15 19:53:14.947320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.812 [2024-07-15 
19:53:14.947335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.813 [2024-07-15 19:53:14.947345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.813 [2024-07-15 19:53:14.947355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50608 len:8 PRP1 0x0 PRP2 0x0 00:15:26.813 [2024-07-15 19:53:14.947369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.813 [2024-07-15 19:53:14.947391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.813 [2024-07-15 19:53:14.947402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50616 len:8 PRP1 0x0 PRP2 0x0 00:15:26.813 [2024-07-15 19:53:14.947415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.813 [2024-07-15 19:53:14.947452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.813 [2024-07-15 19:53:14.947463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50624 len:8 PRP1 0x0 PRP2 0x0 00:15:26.813 [2024-07-15 19:53:14.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.813 [2024-07-15 19:53:14.947508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.813 [2024-07-15 19:53:14.947518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50632 len:8 PRP1 0x0 PRP2 0x0 00:15:26.813 [2024-07-15 19:53:14.947531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:26.813 [2024-07-15 19:53:14.947555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:26.813 [2024-07-15 19:53:14.947565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50640 len:8 PRP1 0x0 PRP2 0x0 00:15:26.813 [2024-07-15 19:53:14.947578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947634] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1268d90 was disconnected and freed. reset controller. 
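The flood of ABORTED - SQ DELETION (00/08) completions above is the expected signature of a path loss: when the TCP connection to the active listener drops, the initiator aborts every command still outstanding on that qpair and completes it with generic status type 0x0, status code 0x08 (Command Aborted due to SQ Deletion), after which bdev_nvme frees the qpair and fails over to the next configured trid. A minimal sketch of the target-side trigger, assuming the failover script cycles paths by removing the active listener (only the matching add-listener calls are visible in this excerpt, so treat the exact call as an assumption):

  # hypothetical trigger on the target side; the initiator reaction is what the log above shows
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422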
00:15:26.813 [2024-07-15 19:53:14.947652] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:26.813 [2024-07-15 19:53:14.947706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.813 [2024-07-15 19:53:14.947727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.813 [2024-07-15 19:53:14.947756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.813 [2024-07-15 19:53:14.947798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.813 [2024-07-15 19:53:14.947824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.813 [2024-07-15 19:53:14.947837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:26.813 [2024-07-15 19:53:14.947871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ea710 (9): Bad file descriptor 00:15:26.813 [2024-07-15 19:53:14.951760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:26.813 [2024-07-15 19:53:14.986652] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:26.813 00:15:26.813 Latency(us) 00:15:26.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.813 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.813 Verification LBA range: start 0x0 length 0x4000 00:15:26.813 NVMe0n1 : 15.01 8931.04 34.89 214.61 0.00 13963.61 647.91 16324.42 00:15:26.813 =================================================================================================================== 00:15:26.813 Total : 8931.04 34.89 214.61 0.00 13963.61 647.91 16324.42 00:15:26.813 Received shutdown signal, test time was about 15.000000 seconds 00:15:26.813 00:15:26.813 Latency(us) 00:15:26.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.813 =================================================================================================================== 00:15:26.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:26.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
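The pass criterion for this phase is just a line count: the controller must have failed over and reset successfully once per listener cycle, so the grep above counts "Resetting controller successful" occurrences and the comparison that follows requires exactly 3. A minimal sketch of that check, reusing the try.txt capture file named later in this trace (the real input to the grep is not shown in this excerpt, so the file name here is an assumption):

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }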
00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76084 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76084 /var/tmp/bdevperf.sock 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76084 ']' 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.813 19:53:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.748 19:53:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.748 19:53:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:27.748 19:53:21 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:28.006 [2024-07-15 19:53:22.033424] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:28.006 19:53:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:28.264 [2024-07-15 19:53:22.273612] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:28.264 19:53:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.522 NVMe0n1 00:15:28.522 19:53:22 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.781 00:15:28.781 19:53:22 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.047 00:15:29.048 19:53:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.048 19:53:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:29.306 19:53:23 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.564 19:53:23 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:32.849 19:53:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:32.849 19:53:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:32.849 19:53:26 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76165 00:15:32.849 19:53:26 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.849 19:53:26 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76165 00:15:34.225 0 00:15:34.225 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:34.225 [2024-07-15 19:53:20.839401] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:34.225 [2024-07-15 19:53:20.839571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76084 ] 00:15:34.225 [2024-07-15 19:53:20.973971] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.225 [2024-07-15 19:53:21.085259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.225 [2024-07-15 19:53:21.145730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:34.225 [2024-07-15 19:53:23.702467] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:34.225 [2024-07-15 19:53:23.702609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.225 [2024-07-15 19:53:23.702633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.225 [2024-07-15 19:53:23.702652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.225 [2024-07-15 19:53:23.702666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.225 [2024-07-15 19:53:23.702679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.225 [2024-07-15 19:53:23.702692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.225 [2024-07-15 19:53:23.702705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.225 [2024-07-15 19:53:23.702718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.225 [2024-07-15 19:53:23.702731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:34.225 [2024-07-15 19:53:23.702780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:34.225 [2024-07-15 19:53:23.702825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157c710 (9): Bad file descriptor 00:15:34.225 [2024-07-15 19:53:23.711834] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
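The second phase wires up true multipath before generating load: bdevperf is launched with -z so it idles until told to run over /var/tmp/bdevperf.sock, the target gains listeners on 4421 and 4422 next to 4420, and the same controller name NVMe0 is attached once per port so bdev_nvme holds three paths to nqn.2016-06.io.spdk:cnode1. Detaching the 4420 path then forces the failover to 4421 seen in the try.txt excerpt above, and I/O only starts once perform_tests is issued. A condensed sketch of the sequence (every command appears verbatim in the trace; the loop and the $rpc shorthand are only compaction):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests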
00:15:34.225 Running I/O for 1 seconds... 00:15:34.225 00:15:34.225 Latency(us) 00:15:34.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.225 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:34.225 Verification LBA range: start 0x0 length 0x4000 00:15:34.225 NVMe0n1 : 1.01 8284.16 32.36 0.00 0.00 15362.43 1772.45 18230.92 00:15:34.225 =================================================================================================================== 00:15:34.225 Total : 8284.16 32.36 0.00 0.00 15362.43 1772.45 18230.92 00:15:34.225 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.225 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:34.225 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.483 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:34.483 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.741 19:53:28 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.999 19:53:29 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76084 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76084 ']' 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76084 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76084 00:15:38.284 killing process with pid 76084 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76084' 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76084 00:15:38.284 19:53:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76084 00:15:38.543 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:38.543 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.802 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:38.802 19:53:32 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.802 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.802 rmmod nvme_tcp 00:15:38.802 rmmod nvme_fabrics 00:15:39.060 rmmod nvme_keyring 00:15:39.060 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.060 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:39.060 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:39.060 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75829 ']' 00:15:39.060 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75829 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75829 ']' 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75829 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75829 00:15:39.061 killing process with pid 75829 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75829' 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75829 00:15:39.061 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75829 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.318 00:15:39.318 real 0m32.986s 00:15:39.318 user 2m7.593s 00:15:39.318 sys 0m5.732s 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.318 19:53:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 ************************************ 00:15:39.318 END TEST nvmf_failover 00:15:39.318 
************************************ 00:15:39.318 19:53:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:39.318 19:53:33 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.318 19:53:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.318 19:53:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.318 19:53:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 ************************************ 00:15:39.318 START TEST nvmf_host_discovery 00:15:39.318 ************************************ 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.318 * Looking for test storage... 00:15:39.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.318 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.576 Cannot find device "nvmf_tgt_br" 00:15:39.576 
19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.576 Cannot find device "nvmf_tgt_br2" 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.576 Cannot find device "nvmf_tgt_br" 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.576 Cannot find device "nvmf_tgt_br2" 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.576 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.577 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.577 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.577 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:39.577 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:39.577 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:39.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:39.834 00:15:39.834 --- 10.0.0.2 ping statistics --- 00:15:39.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.834 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:39.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:39.834 00:15:39.834 --- 10.0.0.3 ping statistics --- 00:15:39.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.834 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:39.834 00:15:39.834 --- 10.0.0.1 ping statistics --- 00:15:39.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.834 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76433 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76433 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76433 ']' 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.834 19:53:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.834 [2024-07-15 19:53:33.962956] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:39.834 [2024-07-15 19:53:33.963088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.093 [2024-07-15 19:53:34.099016] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.093 [2024-07-15 19:53:34.208426] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
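Before the target starts, nvmf_veth_init has rebuilt the software test network used by the discovery test that follows: namespace nvmf_tgt_ns_spdk holds the target end of three veth pairs, the initiator keeps nvmf_init_if at 10.0.0.1, the target interfaces carry 10.0.0.2 and 10.0.0.3, everything is joined by the nvmf_br bridge, and iptables admits the NVMe/TCP traffic. A stripped-down sketch of the same steps (commands taken from the trace above; link-up commands and leftover cleanup omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, back to 10.0.0.1 simply confirm this fabric is reachable before nvmf_tgt is launched inside the namespace.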
00:15:40.093 [2024-07-15 19:53:34.208499] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.093 [2024-07-15 19:53:34.208525] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.093 [2024-07-15 19:53:34.208533] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.093 [2024-07-15 19:53:34.208540] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.093 [2024-07-15 19:53:34.208564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.093 [2024-07-15 19:53:34.261615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 [2024-07-15 19:53:34.960986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 [2024-07-15 19:53:34.973122] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 null0 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 null1 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.029 19:53:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76465 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76465 /tmp/host.sock 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76465 ']' 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.029 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.029 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 [2024-07-15 19:53:35.052940] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:41.029 [2024-07-15 19:53:35.053048] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76465 ] 00:15:41.029 [2024-07-15 19:53:35.191535] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.287 [2024-07-15 19:53:35.324036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.287 [2024-07-15 19:53:35.381692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.855 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.855 19:53:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:41.855 19:53:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.855 19:53:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.855 19:53:36 
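At this point the discovery topology is complete: the target (pid 76433, inside the namespace) exposes the well-known discovery subsystem nqn.2014-08.org.nvmexpress.discovery on 10.0.0.2:8009 and holds two null bdevs (null0, null1) ready to publish, while a second SPDK application (pid 76465, RPC socket /tmp/host.sock) plays the host role and has just been pointed at that discovery service. Reduced to plain rpc.py calls, the two key steps look roughly like this (the trace issues them through the rpc_cmd wrapper, so take the exact invocation form as an approximation):

  # target side: advertise the discovery service
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # host side: follow the discovery service and auto-attach whatever it advertises, using the bdev prefix "nvme"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test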
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:41.855 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.114 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.115 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.115 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 [2024-07-15 19:53:36.417524] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:42.447 
19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.447 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.705 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:42.705 19:53:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:42.962 [2024-07-15 19:53:37.016515] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:42.962 [2024-07-15 19:53:37.016568] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:42.962 [2024-07-15 19:53:37.016587] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:42.963 [2024-07-15 19:53:37.022562] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:42.963 [2024-07-15 19:53:37.080163] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:42.963 [2024-07-15 19:53:37.080189] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:43.528 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.785 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.786 19:53:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.786 [2024-07-15 19:53:38.019562] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.786 [2024-07-15 19:53:38.019937] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:43.786 [2024-07-15 19:53:38.019989] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:43.786 [2024-07-15 19:53:38.025931] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.786 19:53:38 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:43.786 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.043 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.044 [2024-07-15 19:53:38.090227] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:44.044 [2024-07-15 19:53:38.090254] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:44.044 [2024-07-15 19:53:38.090261] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 [2024-07-15 19:53:38.260713] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:44.044 [2024-07-15 19:53:38.260758] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:44.044 [2024-07-15 19:53:38.266702] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:44.044 [2024-07-15 19:53:38.266734] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:44.044 [2024-07-15 19:53:38.266880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.044 [2024-07-15 19:53:38.266916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.044 [2024-07-15 19:53:38.266938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.044 [2024-07-15 19:53:38.266947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.044 [2024-07-15 19:53:38.266956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.044 [2024-07-15 19:53:38.266973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.044 [2024-07-15 19:53:38.266982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.044 [2024-07-15 19:53:38.266990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.044 [2024-07-15 19:53:38.266998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ffa0 is same with the state(5) to be set 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:44.044 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.302 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.560 19:53:38 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.560 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.561 19:53:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:44.561 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.561 19:53:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.546 [2024-07-15 19:53:39.747298] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:45.546 [2024-07-15 19:53:39.747333] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:45.546 [2024-07-15 19:53:39.747351] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:45.546 [2024-07-15 19:53:39.753347] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:45.804 [2024-07-15 19:53:39.813843] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:45.804 [2024-07-15 19:53:39.813884] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.804 request: 00:15:45.804 { 00:15:45.804 "name": "nvme", 00:15:45.804 "trtype": "tcp", 00:15:45.804 "traddr": "10.0.0.2", 00:15:45.804 "adrfam": "ipv4", 00:15:45.804 "trsvcid": "8009", 00:15:45.804 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:45.804 "wait_for_attach": true, 00:15:45.804 "method": "bdev_nvme_start_discovery", 00:15:45.804 "req_id": 1 00:15:45.804 } 00:15:45.804 Got JSON-RPC error response 00:15:45.804 response: 00:15:45.804 { 00:15:45.804 "code": -17, 00:15:45.804 "message": "File exists" 00:15:45.804 } 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.804 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.805 request: 00:15:45.805 { 00:15:45.805 "name": "nvme_second", 00:15:45.805 "trtype": "tcp", 00:15:45.805 "traddr": "10.0.0.2", 00:15:45.805 "adrfam": "ipv4", 00:15:45.805 "trsvcid": "8009", 00:15:45.805 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:45.805 "wait_for_attach": true, 00:15:45.805 "method": "bdev_nvme_start_discovery", 00:15:45.805 "req_id": 1 00:15:45.805 } 00:15:45.805 Got JSON-RPC error response 00:15:45.805 response: 00:15:45.805 { 00:15:45.805 "code": -17, 00:15:45.805 "message": "File exists" 00:15:45.805 } 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.805 19:53:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.805 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.063 19:53:40 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.063 19:53:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.996 [2024-07-15 19:53:41.078656] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:46.996 [2024-07-15 19:53:41.078757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbcdc0 with addr=10.0.0.2, port=8010 00:15:46.996 [2024-07-15 19:53:41.078798] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:46.996 [2024-07-15 19:53:41.078809] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:46.996 [2024-07-15 19:53:41.078818] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:47.931 [2024-07-15 19:53:42.078695] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:47.931 [2024-07-15 19:53:42.078804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbcdc0 with addr=10.0.0.2, port=8010 00:15:47.931 [2024-07-15 19:53:42.078830] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:47.931 [2024-07-15 19:53:42.078840] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:47.931 [2024-07-15 19:53:42.078848] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:48.867 [2024-07-15 19:53:43.078530] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:48.867 request: 00:15:48.867 { 00:15:48.867 "name": "nvme_second", 00:15:48.867 "trtype": "tcp", 00:15:48.867 "traddr": "10.0.0.2", 00:15:48.867 "adrfam": "ipv4", 00:15:48.867 "trsvcid": "8010", 00:15:48.867 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:48.867 "wait_for_attach": false, 00:15:48.867 "attach_timeout_ms": 3000, 00:15:48.867 "method": "bdev_nvme_start_discovery", 00:15:48.867 "req_id": 1 00:15:48.867 } 00:15:48.867 Got JSON-RPC error response 00:15:48.867 response: 00:15:48.867 { 00:15:48.867 "code": -110, 
00:15:48.867 "message": "Connection timed out" 00:15:48.867 } 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:48.867 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76465 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.126 rmmod nvme_tcp 00:15:49.126 rmmod nvme_fabrics 00:15:49.126 rmmod nvme_keyring 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76433 ']' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76433 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76433 ']' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76433 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76433 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:49.126 killing 
process with pid 76433 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76433' 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76433 00:15:49.126 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76433 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:49.384 00:15:49.384 real 0m10.070s 00:15:49.384 user 0m19.492s 00:15:49.384 sys 0m1.982s 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.384 ************************************ 00:15:49.384 END TEST nvmf_host_discovery 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.384 ************************************ 00:15:49.384 19:53:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.384 19:53:43 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:49.384 19:53:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.384 19:53:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.384 19:53:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.384 ************************************ 00:15:49.384 START TEST nvmf_host_multipath_status 00:15:49.384 ************************************ 00:15:49.384 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:49.643 * Looking for test storage... 
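The discovery test that ends above exercises two failure paths of bdev_nvme_start_discovery: re-registering a discovery service that is already being polled returns JSON-RPC error -17 ("File exists"), and an attach against a listener that never answers fails with -110 ("Connection timed out") once the -T window expires. A minimal sketch of that sequence, assuming the host-side app is reachable on /tmp/host.sock and that a first registration named "nvme" already succeeded earlier in the run, as in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Re-registering the same discovery service (under any -b name) is rejected with -17 "File exists".
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # A discovery port that never responds times out with -110 after the -T window (in ms).
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

  # The state the test helpers verify after each attempt: one discovery ctrlr, the nvme0n1/nvme0n2 bdevs.
  $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'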
00:15:49.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.643 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:49.644 Cannot find device "nvmf_tgt_br" 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:49.644 Cannot find device "nvmf_tgt_br2" 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:49.644 Cannot find device "nvmf_tgt_br" 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:49.644 Cannot find device "nvmf_tgt_br2" 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.644 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.903 19:53:43 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:49.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:15:49.903 00:15:49.903 --- 10.0.0.2 ping statistics --- 00:15:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.903 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:49.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:49.903 00:15:49.903 --- 10.0.0.3 ping statistics --- 00:15:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.903 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:49.903 00:15:49.903 --- 10.0.0.1 ping statistics --- 00:15:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.903 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:49.903 19:53:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76919 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76919 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76919 ']' 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.903 19:53:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:49.904 [2024-07-15 19:53:44.085607] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
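Before the multipath test can talk to anything, nvmftestinit builds the virtual topology seen in the trace: a dedicated network namespace for the target, veth pairs bridged back to the initiator, an iptables rule for the NVMe/TCP port, and ping checks in both directions. A condensed sketch using the same interface and namespace names as above (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here):

  # Target side lives in its own namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target 10.0.0.2.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring the links up and bridge the host-side veth peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port, allow bridge forwarding, and verify reachability both ways.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1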
00:15:49.904 [2024-07-15 19:53:44.085724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.163 [2024-07-15 19:53:44.224784] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:50.163 [2024-07-15 19:53:44.327225] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.163 [2024-07-15 19:53:44.327298] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.163 [2024-07-15 19:53:44.327310] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.163 [2024-07-15 19:53:44.327317] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.163 [2024-07-15 19:53:44.327324] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.163 [2024-07-15 19:53:44.327878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.163 [2024-07-15 19:53:44.327922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.163 [2024-07-15 19:53:44.384286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76919 00:15:51.098 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:51.356 [2024-07-15 19:53:45.377755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.356 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:51.614 Malloc0 00:15:51.614 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:51.872 19:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.872 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.130 [2024-07-15 19:53:46.285289] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.130 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.389 [2024-07-15 19:53:46.481255] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76969 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76969 /var/tmp/bdevperf.sock 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76969 ']' 00:15:52.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.389 19:53:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:53.350 19:53:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.350 19:53:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:53.350 19:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:53.607 19:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:53.865 Nvme0n1 00:15:53.865 19:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:54.124 Nvme0n1 00:15:54.124 19:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:54.124 19:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:56.654 19:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:56.654 19:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:56.654 19:53:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:56.654 19:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:57.588 19:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:57.588 19:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:57.588 19:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.588 19:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.846 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.846 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:57.846 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.846 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:58.105 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.105 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.105 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.105 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.362 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.362 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.362 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.362 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.620 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.620 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.620 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.620 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.888 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.888 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:58.888 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.888 19:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.187 19:53:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.187 19:53:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:59.187 19:53:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:59.448 19:53:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:59.706 19:53:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:00.643 19:53:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:00.643 19:53:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:00.643 19:53:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.643 19:53:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.899 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:00.900 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:00.900 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.900 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.157 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.157 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.157 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.157 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.414 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.414 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.414 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.414 19:53:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.671 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.671 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.671 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.671 19:53:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.929 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.929 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:01.929 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:01.929 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.187 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.187 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:02.187 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:02.445 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:02.703 19:53:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:03.636 19:53:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:03.636 19:53:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:03.636 19:53:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.636 19:53:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:03.898 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.898 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:03.898 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.898 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.463 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.720 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.720 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:04.720 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:04.720 19:53:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.978 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.978 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:04.978 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.978 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.272 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.272 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:05.272 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:05.531 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:05.789 19:53:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:06.724 19:54:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:06.724 19:54:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:06.724 19:54:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.724 19:54:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:06.983 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.983 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:06.983 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.983 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.242 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.242 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.242 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.242 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.499 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.499 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.499 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.499 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.757 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.757 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:07.757 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.757 19:54:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.016 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.016 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:08.016 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.016 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.275 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.275 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:08.275 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:08.533 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:08.850 19:54:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:09.789 19:54:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:09.789 19:54:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:09.789 19:54:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.789 19:54:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.048 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.048 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:10.048 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.048 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.307 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.307 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.307 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.307 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.566 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.566 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:10.566 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.566 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:10.825 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.825 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:10.825 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.825 19:54:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.083 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.083 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:11.083 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.083 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.342 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.342 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:11.342 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:11.601 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:11.601 19:54:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:12.977 19:54:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:12.977 19:54:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:12.977 19:54:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.977 19:54:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:12.977 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.977 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:12.977 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.977 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.236 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.236 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:13.236 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.236 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:13.494 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.494 19:54:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:13.494 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:13.494 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.753 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.753 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:13.753 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.753 19:54:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.024 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.024 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.024 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.024 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:14.309 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.309 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:14.568 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:14.568 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:14.826 19:54:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:14.826 19:54:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:16.202 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.203 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.461 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.461 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:16.461 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:16.461 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.719 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.719 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:16.719 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.719 19:54:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:16.977 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.977 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:16.977 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.977 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.236 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.236 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.236 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.236 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.494 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.494 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:17.494 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:17.753 19:54:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:18.011 19:54:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:18.945 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:18.945 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:18.945 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.945 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:19.202 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.202 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:19.202 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.202 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.459 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.459 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.459 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.459 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:19.735 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.735 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:19.735 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.735 19:54:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:19.998 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.998 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:19.998 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.998 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.256 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.256 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:20.256 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.256 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:20.515 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.515 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:20.515 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:20.773 19:54:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:21.030 19:54:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:21.962 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:21.962 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:21.962 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.962 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:22.219 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.219 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:22.219 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.219 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:22.475 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.475 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:22.475 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:22.475 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.734 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.734 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:22.734 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.734 19:54:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:22.992 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.992 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.992 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.992 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:23.250 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:23.507 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:23.766 19:54:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:25.139 19:54:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:25.139 19:54:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:25.139 19:54:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.139 19:54:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:25.139 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.139 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:25.139 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:25.139 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.404 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:25.404 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:25.404 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.404 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:25.685 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.685 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:25.685 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.685 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:25.944 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.944 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.944 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.944 19:54:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:26.242 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.242 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:26.242 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.242 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.242 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.243 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76969 00:16:26.243 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76969 ']' 00:16:26.243 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76969 00:16:26.243 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76969 00:16:26.502 killing process with pid 76969 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76969' 00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76969 
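The trace above keeps cycling three small helpers from host/multipath_status.sh: set_ANA_state changes the ANA state of the two target listeners (ports 4420 and 4421 on 10.0.0.2), while check_status/port_status poll bdev_nvme_get_io_paths over the bdevperf RPC socket and compare each path's current/connected/accessible flags against the expected values. The sketch below is a reconstruction assembled only from the commands visible in this trace, not the verbatim upstream script; the rpc.py path, socket, NQN and ports are the ones the log shows.

#!/usr/bin/env bash
# Sketch reconstructed from the trace above; values taken from the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Set the ANA state of the 4420 and 4421 listeners on the target side.
set_ANA_state() {            # usage: set_ANA_state <state_for_4420> <state_for_4421>
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Ask bdevperf (via its RPC socket) about one path and compare a single
# field (current/connected/accessible) with the expected value.
port_status() {              # usage: port_status <port> <field> <expected>
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc -s "$bdevperf_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# One check_status round, in the order the trace shows: current, connected
# and accessible, first for 4420 and then for 4421.
check_status() {             # usage: check_status cur4420 cur4421 con4420 con4421 acc4420 acc4421
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# Example cycle matching the log: make both listeners inaccessible, give the
# host a second to observe the ANA change, then verify what bdevperf reports.
set_ANA_state inaccessible inaccessible
sleep 1
check_status false false true true false false

Later in the run the trace switches the policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, after which both paths are expected to report current=true whenever they are accessible, which is what the subsequent check_status true true true true true true calls verify.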
00:16:26.502 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76969 00:16:26.502 Connection closed with partial response: 00:16:26.502 00:16:26.502 00:16:26.764 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76969 00:16:26.764 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:26.764 [2024-07-15 19:53:46.554686] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:16:26.764 [2024-07-15 19:53:46.554785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76969 ] 00:16:26.764 [2024-07-15 19:53:46.695210] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.764 [2024-07-15 19:53:46.812891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.764 [2024-07-15 19:53:46.871040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.764 Running I/O for 90 seconds... 00:16:26.764 [2024-07-15 19:54:02.720152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.764 [2024-07-15 19:54:02.720917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.720979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.720994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:26.764 [2024-07-15 19:54:02.721310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
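Every completion printed in this stretch of try.txt carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible): the I/O landed on the listener whose ANA state had just been set to inaccessible, and the host's multipath layer is expected to retry it on the other path. A quick, purely illustrative way to summarize such a dump (the file path is the one the test cats above; the snippet itself is not part of the test):

# Histogram of completion statuses, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
grep -o 'NOTICE\*: [A-Z ][A-Z ]* ([0-9a-f]*/[0-9a-f]*)' "$log" | sort | uniq -c | sort -rn

# READ vs. WRITE split of the commands printed alongside those completions.
grep -o 'print_command: \*NOTICE\*: [A-Z]*' "$log" | awk '{print $NF}' | sort | uniq -c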
00:16:26.764 [2024-07-15 19:54:02.721373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.764 [2024-07-15 19:54:02.721387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.721420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.721454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.721487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.721522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.721555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.721975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.765 [2024-07-15 19:54:02.722714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.722785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.722819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.722851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:26.765 [2024-07-15 19:54:02.722870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.765 [2024-07-15 19:54:02.722892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.722911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.722924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.722943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.722956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.722975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.722988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723144] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.723913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.723971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.723985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.766 [2024-07-15 19:54:02.724933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.724965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.724981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.725009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.725024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.725066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.725092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.725108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.725147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.725163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.766 [2024-07-15 19:54:02.725205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.766 [2024-07-15 19:54:02.725220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:26.767 [2024-07-15 19:54:02.725480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.725971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.726010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:02.726036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:02.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.966743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.966812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.966882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.966903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.966925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.966940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.966959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.966973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.966992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:16:26.767 [2024-07-15 19:54:17.967447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.767 [2024-07-15 19:54:17.967670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:26.767 [2024-07-15 19:54:17.967689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.767 [2024-07-15 19:54:17.967703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.967735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.967768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.967800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.967833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.967865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.967907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.967940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.967974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.967992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.968137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.968170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.968202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:26.768 [2024-07-15 19:54:17.968457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.968577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.768 [2024-07-15 19:54:17.970078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.768 [2024-07-15 19:54:17.970358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:26.768 [2024-07-15 19:54:17.970378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.769 [2024-07-15 19:54:17.970393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.769 [2024-07-15 19:54:17.970426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:26.769 [2024-07-15 19:54:17.970698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.769 [2024-07-15 19:54:17.970713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:26.769 Received shutdown signal, test time was about 32.098788 seconds 00:16:26.769 00:16:26.769 Latency(us) 00:16:26.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.769 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:26.769 Verification LBA range: start 0x0 length 0x4000 00:16:26.769 Nvme0n1 : 32.10 9317.77 36.40 0.00 0.00 13707.51 767.07 4026531.84 00:16:26.769 =================================================================================================================== 00:16:26.769 Total : 9317.77 36.40 0.00 0.00 13707.51 767.07 4026531.84 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.769 19:54:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.027 rmmod nvme_tcp 00:16:27.027 rmmod nvme_fabrics 00:16:27.027 rmmod nvme_keyring 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76919 ']' 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76919 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76919 ']' 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76919 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.027 19:54:21 
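The verify job above finished cleanly (9317.77 IOPS / 36.40 MiB/s averaged over the ~32 s run, per the Latency table), and the trace around this point is the nvmf_host_multipath_status teardown. Condensed into plain shell it amounts to roughly the sketch below; PID 76919 and the absolute repository paths are specific to this run, and the real nvmftestfini/killprocess helpers carry retry loops and error handling that are omitted here:

  # teardown of the multipath_status fixture (condensed sketch of the traced commands)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics and nvme_keyring (the rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 76919 && wait 76919       # stop the nvmf_tgt reactor process started for this test
  ip -4 addr flush nvmf_init_if  # final network cleanup before the next test starts
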
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76919 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:27.027 killing process with pid 76919 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76919' 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76919 00:16:27.027 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76919 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:27.286 00:16:27.286 real 0m37.799s 00:16:27.286 user 2m1.826s 00:16:27.286 sys 0m11.166s 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.286 19:54:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.286 ************************************ 00:16:27.286 END TEST nvmf_host_multipath_status 00:16:27.286 ************************************ 00:16:27.286 19:54:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:27.286 19:54:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:27.286 19:54:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:27.286 19:54:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.286 19:54:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.286 ************************************ 00:16:27.286 START TEST nvmf_discovery_remove_ifc 00:16:27.286 ************************************ 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:27.286 * Looking for test storage... 
00:16:27.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.286 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.287 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:27.546 Cannot find device "nvmf_tgt_br" 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:27.546 Cannot find device "nvmf_tgt_br2" 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:27.546 Cannot find device "nvmf_tgt_br" 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:27.546 Cannot find device "nvmf_tgt_br2" 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.546 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:27.805 00:16:27.805 --- 10.0.0.2 ping statistics --- 00:16:27.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.805 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:27.805 00:16:27.805 --- 10.0.0.3 ping statistics --- 00:16:27.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.805 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:27.805 00:16:27.805 --- 10.0.0.1 ping statistics --- 00:16:27.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.805 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77744 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77744 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77744 ']' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.805 19:54:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.805 [2024-07-15 19:54:21.971461] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
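Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. Every command, interface name and address is taken from the log itself; the sketch only assumes a root shell with iproute2 and iptables available instead of the autotest wrappers.

# Test network built by nvmf_veth_init: one initiator-side veth on the host,
# two target-side veths moved into the nvmf_tgt_ns_spdk namespace, all bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # allow hairpin across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # host -> namespace reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # namespace -> host reachability

With connectivity confirmed, nvmfappstart runs the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks on /var/tmp/spdk.sock, which is what the pid-77744 messages above correspond to.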
00:16:27.805 [2024-07-15 19:54:21.971622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.064 [2024-07-15 19:54:22.121691] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.064 [2024-07-15 19:54:22.233928] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.064 [2024-07-15 19:54:22.234162] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.064 [2024-07-15 19:54:22.234180] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.064 [2024-07-15 19:54:22.234189] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.064 [2024-07-15 19:54:22.234196] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.064 [2024-07-15 19:54:22.234218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.064 [2024-07-15 19:54:22.288340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.999 19:54:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.999 19:54:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:28.999 19:54:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.999 19:54:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.999 19:54:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 [2024-07-15 19:54:23.047380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.999 [2024-07-15 19:54:23.055443] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:28.999 null0 00:16:28.999 [2024-07-15 19:54:23.087368] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77776 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77776 /tmp/host.sock 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77776 ']' 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:28.999 19:54:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:28.999 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.999 19:54:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 [2024-07-15 19:54:23.193555] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:16:29.000 [2024-07-15 19:54:23.193711] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77776 ] 00:16:29.260 [2024-07-15 19:54:23.346548] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.260 [2024-07-15 19:54:23.464299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.194 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.195 [2024-07-15 19:54:24.198492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.195 19:54:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.191 [2024-07-15 19:54:25.250464] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:31.191 [2024-07-15 19:54:25.250506] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:31.191 [2024-07-15 19:54:25.250540] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.191 [2024-07-15 19:54:25.256507] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:31.191 [2024-07-15 19:54:25.313467] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:31.191 [2024-07-15 19:54:25.313563] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:31.191 [2024-07-15 19:54:25.313591] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:31.192 [2024-07-15 19:54:25.313607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:31.192 [2024-07-15 19:54:25.313633] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.192 [2024-07-15 19:54:25.319107] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24ddfd0 was disconnected and freed. delete nvme_qpair. 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.192 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.450 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:31.450 19:54:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.386 19:54:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.322 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.580 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.580 19:54:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:34.515 19:54:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.450 19:54:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.824 [2024-07-15 19:54:30.741460] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:36.824 [2024-07-15 19:54:30.741543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.824 [2024-07-15 19:54:30.741559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.824 [2024-07-15 19:54:30.741573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.824 [2024-07-15 19:54:30.741582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.824 [2024-07-15 19:54:30.741592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.824 [2024-07-15 19:54:30.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.824 [2024-07-15 19:54:30.741625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:36.824 [2024-07-15 19:54:30.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.824 [2024-07-15 19:54:30.741642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.824 [2024-07-15 19:54:30.741651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.824 [2024-07-15 19:54:30.741660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2443c60 is same with the state(5) to be set 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.824 19:54:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.824 [2024-07-15 19:54:30.751454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2443c60 (9): Bad file descriptor 00:16:36.824 [2024-07-15 19:54:30.761512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.758 [2024-07-15 19:54:31.784413] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:37.758 [2024-07-15 19:54:31.784775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2443c60 with addr=10.0.0.2, port=4420 00:16:37.758 [2024-07-15 19:54:31.784827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2443c60 is same with the state(5) to be set 00:16:37.758 [2024-07-15 19:54:31.784931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2443c60 (9): Bad file descriptor 00:16:37.758 [2024-07-15 19:54:31.785861] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:37.758 [2024-07-15 19:54:31.785924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:37.758 [2024-07-15 19:54:31.785946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:37.758 [2024-07-15 19:54:31.785981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:37.758 [2024-07-15 19:54:31.786046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
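The connection-timeout, failover and reset errors above are the expected consequence of how the host-side app was configured earlier in this trace: discovery on 10.0.0.2:8009 was attached with a 1 s reconnect delay, a 1 s fast-io-fail timeout and a 2 s controller-loss timeout, so once nvmf_tgt_if loses its address the controller cycles through reconnect attempts and then fails. A condensed sketch of that host-side setup, assuming rpc_cmd resolves to the stock scripts/rpc.py client as it does in the autotest harness; the flags are copied verbatim from the traced RPCs (host/discovery_remove_ifc.sh lines 58-69).

# Second SPDK app acting as the NVMe-oF host, with its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# RPCs issued against the host app, as seen in the trace.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
scripts/rpc.py -s /tmp/host.sock framework_start_init
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach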
00:16:37.758 [2024-07-15 19:54:31.786070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.758 19:54:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.694 [2024-07-15 19:54:32.786137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:38.694 [2024-07-15 19:54:32.786191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:38.694 [2024-07-15 19:54:32.786219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:38.694 [2024-07-15 19:54:32.786228] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:38.694 [2024-07-15 19:54:32.786250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:38.694 [2024-07-15 19:54:32.786294] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:38.694 [2024-07-15 19:54:32.786362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.694 [2024-07-15 19:54:32.786378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.694 [2024-07-15 19:54:32.786407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.694 [2024-07-15 19:54:32.786416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.694 [2024-07-15 19:54:32.786425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.694 [2024-07-15 19:54:32.786449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.694 [2024-07-15 19:54:32.786475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.694 [2024-07-15 19:54:32.786484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.694 [2024-07-15 19:54:32.786494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.694 [2024-07-15 19:54:32.786502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.694 [2024-07-15 19:54:32.786511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
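Each one-second iteration in this part of the log is the get_bdev_list/wait_for_bdev pair from host/discovery_remove_ifc.sh lines 29-34. An approximate reconstruction of those helpers, inferred from the traced pipeline and comparisons; the real script may differ in error handling and retry limits.

# get_bdev_list: the bdev names known to the host app, as one sorted line.
get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev: poll once per second until the list matches the expectation,
# e.g. "nvme0n1" right after attach, "" once the interface removal has torn
# the controller down, "nvme1n1" after the re-attach later in the log.
wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
                sleep 1
        done
}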
00:16:38.694 [2024-07-15 19:54:32.787078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2447a00 (9): Bad file descriptor 00:16:38.694 [2024-07-15 19:54:32.788088] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:38.694 [2024-07-15 19:54:32.788118] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.694 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.953 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:38.953 19:54:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.894 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.895 19:54:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.895 19:54:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:39.895 19:54:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:40.844 [2024-07-15 19:54:34.794791] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:40.844 [2024-07-15 19:54:34.794822] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:40.844 [2024-07-15 19:54:34.794840] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:40.844 [2024-07-15 19:54:34.800824] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:40.844 [2024-07-15 19:54:34.857027] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:40.844 [2024-07-15 19:54:34.857076] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:40.844 [2024-07-15 19:54:34.857099] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:40.844 [2024-07-15 19:54:34.857115] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:40.844 [2024-07-15 19:54:34.857123] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:40.844 [2024-07-15 19:54:34.863510] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24b32d0 was disconnected and freed. delete nvme_qpair. 
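At this point the scenario the test exercises has completed a full cycle. Reduced to the verbatim commands from the trace, and using the wait_for_bdev helper sketched above, it is:

# Remove the listener address and take the target interface down ...
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''          # nvme0n1 must disappear once reconnects exhaust the 2 s loss timeout

# ... then restore it and expect discovery to attach a fresh controller.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1     # the re-attach shows up as nvme1n1, as logged just above

The freed qpair 0x24b32d0 closes that cycle; what follows below is teardown (killprocess of host pid 77776, then nvmftestfini) before the next test, nvmf_identify_kernel_target, starts.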
00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77776 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77776 ']' 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77776 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:40.844 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77776 00:16:41.103 killing process with pid 77776 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77776' 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77776 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77776 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.103 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.361 rmmod nvme_tcp 00:16:41.361 rmmod nvme_fabrics 00:16:41.361 rmmod nvme_keyring 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:41.361 19:54:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77744 ']' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77744 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77744 ']' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77744 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77744 00:16:41.361 killing process with pid 77744 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77744' 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77744 00:16:41.361 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77744 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:41.620 00:16:41.620 real 0m14.268s 00:16:41.620 user 0m24.706s 00:16:41.620 sys 0m2.539s 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.620 ************************************ 00:16:41.620 END TEST nvmf_discovery_remove_ifc 00:16:41.620 ************************************ 00:16:41.620 19:54:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:41.620 19:54:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.620 19:54:35 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:41.620 19:54:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.620 19:54:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.620 19:54:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.620 ************************************ 00:16:41.620 START TEST nvmf_identify_kernel_target 00:16:41.620 ************************************ 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:41.620 * Looking for test storage... 00:16:41.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.620 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:41.878 Cannot find device "nvmf_tgt_br" 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.878 Cannot find device "nvmf_tgt_br2" 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:41.878 Cannot find device "nvmf_tgt_br" 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:41.878 Cannot find device "nvmf_tgt_br2" 00:16:41.878 19:54:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:41.878 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.879 19:54:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.879 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:42.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:42.137 00:16:42.137 --- 10.0.0.2 ping statistics --- 00:16:42.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.137 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:42.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:42.137 00:16:42.137 --- 10.0.0.3 ping statistics --- 00:16:42.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.137 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:42.137 00:16:42.137 --- 10.0.0.1 ping statistics --- 00:16:42.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.137 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:42.137 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:42.138 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:42.138 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:42.138 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:42.138 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:42.138 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.395 Waiting for block devices as requested 00:16:42.395 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:42.652 No valid GPT data, bailing 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:42.652 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:42.653 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:42.653 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:42.653 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:42.653 No valid GPT data, bailing 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:42.910 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:42.911 No valid GPT data, bailing 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:42.911 19:54:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:42.911 No valid GPT data, bailing 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
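The trace above walks /sys/block/nvme* and keeps the first namespace that spdk-gpt.py and blkid both report as having no partition table (/dev/nvme1n1 here); the surrounding entries then export that device through the kernel nvmet configfs tree. A condensed sketch of the sequence, assuming the standard nvmet configfs attribute file names — the xtrace shows the echo commands but not their redirection targets, so those file names are an assumption, not part of the log:

  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"                # creating the subsystem auto-creates namespaces/, allowed_hosts/, ...
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$subsys/attr_model"             # model string (assumed target file)
  echo 1            > "$subsys/attr_allow_any_host"    # accept any host NQN (assumed target file)
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"              # listen on the initiator-side veth address
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                  # expose the subsystem on the port

Once the symlink is in place, the nvme discover against 10.0.0.1:4420 in the following entries returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.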
00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -a 10.0.0.1 -t tcp -s 4420 00:16:42.911 00:16:42.911 Discovery Log Number of Records 2, Generation counter 2 00:16:42.911 =====Discovery Log Entry 0====== 00:16:42.911 trtype: tcp 00:16:42.911 adrfam: ipv4 00:16:42.911 subtype: current discovery subsystem 00:16:42.911 treq: not specified, sq flow control disable supported 00:16:42.911 portid: 1 00:16:42.911 trsvcid: 4420 00:16:42.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:42.911 traddr: 10.0.0.1 00:16:42.911 eflags: none 00:16:42.911 sectype: none 00:16:42.911 =====Discovery Log Entry 1====== 00:16:42.911 trtype: tcp 00:16:42.911 adrfam: ipv4 00:16:42.911 subtype: nvme subsystem 00:16:42.911 treq: not specified, sq flow control disable supported 00:16:42.911 portid: 1 00:16:42.911 trsvcid: 4420 00:16:42.911 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:42.911 traddr: 10.0.0.1 00:16:42.911 eflags: none 00:16:42.911 sectype: none 00:16:42.911 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:42.911 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:43.169 ===================================================== 00:16:43.169 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:43.169 ===================================================== 00:16:43.169 Controller Capabilities/Features 00:16:43.169 ================================ 00:16:43.169 Vendor ID: 0000 00:16:43.169 Subsystem Vendor ID: 0000 00:16:43.169 Serial Number: eb947daebe51d85c9732 00:16:43.169 Model Number: Linux 00:16:43.169 Firmware Version: 6.7.0-68 00:16:43.169 Recommended Arb Burst: 0 00:16:43.169 IEEE OUI Identifier: 00 00 00 00:16:43.169 Multi-path I/O 00:16:43.169 May have multiple subsystem ports: No 00:16:43.169 May have multiple controllers: No 00:16:43.169 Associated with SR-IOV VF: No 00:16:43.169 Max Data Transfer Size: Unlimited 00:16:43.169 Max Number of Namespaces: 0 
00:16:43.169 Max Number of I/O Queues: 1024 00:16:43.169 NVMe Specification Version (VS): 1.3 00:16:43.169 NVMe Specification Version (Identify): 1.3 00:16:43.169 Maximum Queue Entries: 1024 00:16:43.169 Contiguous Queues Required: No 00:16:43.169 Arbitration Mechanisms Supported 00:16:43.169 Weighted Round Robin: Not Supported 00:16:43.169 Vendor Specific: Not Supported 00:16:43.169 Reset Timeout: 7500 ms 00:16:43.169 Doorbell Stride: 4 bytes 00:16:43.169 NVM Subsystem Reset: Not Supported 00:16:43.169 Command Sets Supported 00:16:43.169 NVM Command Set: Supported 00:16:43.169 Boot Partition: Not Supported 00:16:43.169 Memory Page Size Minimum: 4096 bytes 00:16:43.169 Memory Page Size Maximum: 4096 bytes 00:16:43.169 Persistent Memory Region: Not Supported 00:16:43.169 Optional Asynchronous Events Supported 00:16:43.169 Namespace Attribute Notices: Not Supported 00:16:43.169 Firmware Activation Notices: Not Supported 00:16:43.169 ANA Change Notices: Not Supported 00:16:43.169 PLE Aggregate Log Change Notices: Not Supported 00:16:43.169 LBA Status Info Alert Notices: Not Supported 00:16:43.169 EGE Aggregate Log Change Notices: Not Supported 00:16:43.169 Normal NVM Subsystem Shutdown event: Not Supported 00:16:43.169 Zone Descriptor Change Notices: Not Supported 00:16:43.169 Discovery Log Change Notices: Supported 00:16:43.169 Controller Attributes 00:16:43.169 128-bit Host Identifier: Not Supported 00:16:43.169 Non-Operational Permissive Mode: Not Supported 00:16:43.169 NVM Sets: Not Supported 00:16:43.169 Read Recovery Levels: Not Supported 00:16:43.169 Endurance Groups: Not Supported 00:16:43.169 Predictable Latency Mode: Not Supported 00:16:43.169 Traffic Based Keep ALive: Not Supported 00:16:43.169 Namespace Granularity: Not Supported 00:16:43.169 SQ Associations: Not Supported 00:16:43.169 UUID List: Not Supported 00:16:43.169 Multi-Domain Subsystem: Not Supported 00:16:43.169 Fixed Capacity Management: Not Supported 00:16:43.169 Variable Capacity Management: Not Supported 00:16:43.169 Delete Endurance Group: Not Supported 00:16:43.169 Delete NVM Set: Not Supported 00:16:43.169 Extended LBA Formats Supported: Not Supported 00:16:43.169 Flexible Data Placement Supported: Not Supported 00:16:43.169 00:16:43.169 Controller Memory Buffer Support 00:16:43.169 ================================ 00:16:43.169 Supported: No 00:16:43.169 00:16:43.169 Persistent Memory Region Support 00:16:43.169 ================================ 00:16:43.169 Supported: No 00:16:43.169 00:16:43.169 Admin Command Set Attributes 00:16:43.169 ============================ 00:16:43.169 Security Send/Receive: Not Supported 00:16:43.169 Format NVM: Not Supported 00:16:43.169 Firmware Activate/Download: Not Supported 00:16:43.169 Namespace Management: Not Supported 00:16:43.169 Device Self-Test: Not Supported 00:16:43.169 Directives: Not Supported 00:16:43.169 NVMe-MI: Not Supported 00:16:43.169 Virtualization Management: Not Supported 00:16:43.169 Doorbell Buffer Config: Not Supported 00:16:43.169 Get LBA Status Capability: Not Supported 00:16:43.169 Command & Feature Lockdown Capability: Not Supported 00:16:43.169 Abort Command Limit: 1 00:16:43.169 Async Event Request Limit: 1 00:16:43.169 Number of Firmware Slots: N/A 00:16:43.169 Firmware Slot 1 Read-Only: N/A 00:16:43.169 Firmware Activation Without Reset: N/A 00:16:43.169 Multiple Update Detection Support: N/A 00:16:43.169 Firmware Update Granularity: No Information Provided 00:16:43.169 Per-Namespace SMART Log: No 00:16:43.169 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:43.169 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:43.169 Command Effects Log Page: Not Supported 00:16:43.169 Get Log Page Extended Data: Supported 00:16:43.169 Telemetry Log Pages: Not Supported 00:16:43.169 Persistent Event Log Pages: Not Supported 00:16:43.169 Supported Log Pages Log Page: May Support 00:16:43.169 Commands Supported & Effects Log Page: Not Supported 00:16:43.169 Feature Identifiers & Effects Log Page:May Support 00:16:43.169 NVMe-MI Commands & Effects Log Page: May Support 00:16:43.169 Data Area 4 for Telemetry Log: Not Supported 00:16:43.169 Error Log Page Entries Supported: 1 00:16:43.169 Keep Alive: Not Supported 00:16:43.169 00:16:43.169 NVM Command Set Attributes 00:16:43.169 ========================== 00:16:43.169 Submission Queue Entry Size 00:16:43.169 Max: 1 00:16:43.169 Min: 1 00:16:43.169 Completion Queue Entry Size 00:16:43.169 Max: 1 00:16:43.169 Min: 1 00:16:43.170 Number of Namespaces: 0 00:16:43.170 Compare Command: Not Supported 00:16:43.170 Write Uncorrectable Command: Not Supported 00:16:43.170 Dataset Management Command: Not Supported 00:16:43.170 Write Zeroes Command: Not Supported 00:16:43.170 Set Features Save Field: Not Supported 00:16:43.170 Reservations: Not Supported 00:16:43.170 Timestamp: Not Supported 00:16:43.170 Copy: Not Supported 00:16:43.170 Volatile Write Cache: Not Present 00:16:43.170 Atomic Write Unit (Normal): 1 00:16:43.170 Atomic Write Unit (PFail): 1 00:16:43.170 Atomic Compare & Write Unit: 1 00:16:43.170 Fused Compare & Write: Not Supported 00:16:43.170 Scatter-Gather List 00:16:43.170 SGL Command Set: Supported 00:16:43.170 SGL Keyed: Not Supported 00:16:43.170 SGL Bit Bucket Descriptor: Not Supported 00:16:43.170 SGL Metadata Pointer: Not Supported 00:16:43.170 Oversized SGL: Not Supported 00:16:43.170 SGL Metadata Address: Not Supported 00:16:43.170 SGL Offset: Supported 00:16:43.170 Transport SGL Data Block: Not Supported 00:16:43.170 Replay Protected Memory Block: Not Supported 00:16:43.170 00:16:43.170 Firmware Slot Information 00:16:43.170 ========================= 00:16:43.170 Active slot: 0 00:16:43.170 00:16:43.170 00:16:43.170 Error Log 00:16:43.170 ========= 00:16:43.170 00:16:43.170 Active Namespaces 00:16:43.170 ================= 00:16:43.170 Discovery Log Page 00:16:43.170 ================== 00:16:43.170 Generation Counter: 2 00:16:43.170 Number of Records: 2 00:16:43.170 Record Format: 0 00:16:43.170 00:16:43.170 Discovery Log Entry 0 00:16:43.170 ---------------------- 00:16:43.170 Transport Type: 3 (TCP) 00:16:43.170 Address Family: 1 (IPv4) 00:16:43.170 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:43.170 Entry Flags: 00:16:43.170 Duplicate Returned Information: 0 00:16:43.170 Explicit Persistent Connection Support for Discovery: 0 00:16:43.170 Transport Requirements: 00:16:43.170 Secure Channel: Not Specified 00:16:43.170 Port ID: 1 (0x0001) 00:16:43.170 Controller ID: 65535 (0xffff) 00:16:43.170 Admin Max SQ Size: 32 00:16:43.170 Transport Service Identifier: 4420 00:16:43.170 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:43.170 Transport Address: 10.0.0.1 00:16:43.170 Discovery Log Entry 1 00:16:43.170 ---------------------- 00:16:43.170 Transport Type: 3 (TCP) 00:16:43.170 Address Family: 1 (IPv4) 00:16:43.170 Subsystem Type: 2 (NVM Subsystem) 00:16:43.170 Entry Flags: 00:16:43.170 Duplicate Returned Information: 0 00:16:43.170 Explicit Persistent Connection Support for Discovery: 0 00:16:43.170 Transport Requirements: 00:16:43.170 
Secure Channel: Not Specified 00:16:43.170 Port ID: 1 (0x0001) 00:16:43.170 Controller ID: 65535 (0xffff) 00:16:43.170 Admin Max SQ Size: 32 00:16:43.170 Transport Service Identifier: 4420 00:16:43.170 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:43.170 Transport Address: 10.0.0.1 00:16:43.170 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:43.428 get_feature(0x01) failed 00:16:43.428 get_feature(0x02) failed 00:16:43.428 get_feature(0x04) failed 00:16:43.428 ===================================================== 00:16:43.428 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:43.428 ===================================================== 00:16:43.428 Controller Capabilities/Features 00:16:43.428 ================================ 00:16:43.428 Vendor ID: 0000 00:16:43.428 Subsystem Vendor ID: 0000 00:16:43.428 Serial Number: 2582ab7e06e5e0710d99 00:16:43.428 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:43.428 Firmware Version: 6.7.0-68 00:16:43.428 Recommended Arb Burst: 6 00:16:43.428 IEEE OUI Identifier: 00 00 00 00:16:43.428 Multi-path I/O 00:16:43.428 May have multiple subsystem ports: Yes 00:16:43.428 May have multiple controllers: Yes 00:16:43.428 Associated with SR-IOV VF: No 00:16:43.428 Max Data Transfer Size: Unlimited 00:16:43.428 Max Number of Namespaces: 1024 00:16:43.428 Max Number of I/O Queues: 128 00:16:43.428 NVMe Specification Version (VS): 1.3 00:16:43.428 NVMe Specification Version (Identify): 1.3 00:16:43.428 Maximum Queue Entries: 1024 00:16:43.428 Contiguous Queues Required: No 00:16:43.428 Arbitration Mechanisms Supported 00:16:43.428 Weighted Round Robin: Not Supported 00:16:43.428 Vendor Specific: Not Supported 00:16:43.428 Reset Timeout: 7500 ms 00:16:43.428 Doorbell Stride: 4 bytes 00:16:43.428 NVM Subsystem Reset: Not Supported 00:16:43.428 Command Sets Supported 00:16:43.428 NVM Command Set: Supported 00:16:43.428 Boot Partition: Not Supported 00:16:43.428 Memory Page Size Minimum: 4096 bytes 00:16:43.428 Memory Page Size Maximum: 4096 bytes 00:16:43.428 Persistent Memory Region: Not Supported 00:16:43.428 Optional Asynchronous Events Supported 00:16:43.428 Namespace Attribute Notices: Supported 00:16:43.428 Firmware Activation Notices: Not Supported 00:16:43.428 ANA Change Notices: Supported 00:16:43.428 PLE Aggregate Log Change Notices: Not Supported 00:16:43.428 LBA Status Info Alert Notices: Not Supported 00:16:43.428 EGE Aggregate Log Change Notices: Not Supported 00:16:43.428 Normal NVM Subsystem Shutdown event: Not Supported 00:16:43.428 Zone Descriptor Change Notices: Not Supported 00:16:43.428 Discovery Log Change Notices: Not Supported 00:16:43.428 Controller Attributes 00:16:43.428 128-bit Host Identifier: Supported 00:16:43.428 Non-Operational Permissive Mode: Not Supported 00:16:43.428 NVM Sets: Not Supported 00:16:43.428 Read Recovery Levels: Not Supported 00:16:43.428 Endurance Groups: Not Supported 00:16:43.428 Predictable Latency Mode: Not Supported 00:16:43.428 Traffic Based Keep ALive: Supported 00:16:43.428 Namespace Granularity: Not Supported 00:16:43.428 SQ Associations: Not Supported 00:16:43.428 UUID List: Not Supported 00:16:43.428 Multi-Domain Subsystem: Not Supported 00:16:43.428 Fixed Capacity Management: Not Supported 00:16:43.428 Variable Capacity Management: Not Supported 00:16:43.428 
Delete Endurance Group: Not Supported 00:16:43.428 Delete NVM Set: Not Supported 00:16:43.428 Extended LBA Formats Supported: Not Supported 00:16:43.428 Flexible Data Placement Supported: Not Supported 00:16:43.428 00:16:43.428 Controller Memory Buffer Support 00:16:43.428 ================================ 00:16:43.428 Supported: No 00:16:43.428 00:16:43.428 Persistent Memory Region Support 00:16:43.428 ================================ 00:16:43.428 Supported: No 00:16:43.428 00:16:43.428 Admin Command Set Attributes 00:16:43.428 ============================ 00:16:43.428 Security Send/Receive: Not Supported 00:16:43.428 Format NVM: Not Supported 00:16:43.428 Firmware Activate/Download: Not Supported 00:16:43.428 Namespace Management: Not Supported 00:16:43.428 Device Self-Test: Not Supported 00:16:43.428 Directives: Not Supported 00:16:43.428 NVMe-MI: Not Supported 00:16:43.428 Virtualization Management: Not Supported 00:16:43.428 Doorbell Buffer Config: Not Supported 00:16:43.428 Get LBA Status Capability: Not Supported 00:16:43.428 Command & Feature Lockdown Capability: Not Supported 00:16:43.428 Abort Command Limit: 4 00:16:43.428 Async Event Request Limit: 4 00:16:43.428 Number of Firmware Slots: N/A 00:16:43.428 Firmware Slot 1 Read-Only: N/A 00:16:43.428 Firmware Activation Without Reset: N/A 00:16:43.428 Multiple Update Detection Support: N/A 00:16:43.428 Firmware Update Granularity: No Information Provided 00:16:43.428 Per-Namespace SMART Log: Yes 00:16:43.428 Asymmetric Namespace Access Log Page: Supported 00:16:43.428 ANA Transition Time : 10 sec 00:16:43.428 00:16:43.428 Asymmetric Namespace Access Capabilities 00:16:43.428 ANA Optimized State : Supported 00:16:43.428 ANA Non-Optimized State : Supported 00:16:43.428 ANA Inaccessible State : Supported 00:16:43.428 ANA Persistent Loss State : Supported 00:16:43.428 ANA Change State : Supported 00:16:43.428 ANAGRPID is not changed : No 00:16:43.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:43.428 00:16:43.428 ANA Group Identifier Maximum : 128 00:16:43.428 Number of ANA Group Identifiers : 128 00:16:43.428 Max Number of Allowed Namespaces : 1024 00:16:43.428 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:43.428 Command Effects Log Page: Supported 00:16:43.428 Get Log Page Extended Data: Supported 00:16:43.428 Telemetry Log Pages: Not Supported 00:16:43.428 Persistent Event Log Pages: Not Supported 00:16:43.428 Supported Log Pages Log Page: May Support 00:16:43.428 Commands Supported & Effects Log Page: Not Supported 00:16:43.428 Feature Identifiers & Effects Log Page:May Support 00:16:43.428 NVMe-MI Commands & Effects Log Page: May Support 00:16:43.428 Data Area 4 for Telemetry Log: Not Supported 00:16:43.428 Error Log Page Entries Supported: 128 00:16:43.428 Keep Alive: Supported 00:16:43.428 Keep Alive Granularity: 1000 ms 00:16:43.428 00:16:43.428 NVM Command Set Attributes 00:16:43.428 ========================== 00:16:43.428 Submission Queue Entry Size 00:16:43.428 Max: 64 00:16:43.428 Min: 64 00:16:43.428 Completion Queue Entry Size 00:16:43.428 Max: 16 00:16:43.428 Min: 16 00:16:43.428 Number of Namespaces: 1024 00:16:43.428 Compare Command: Not Supported 00:16:43.428 Write Uncorrectable Command: Not Supported 00:16:43.428 Dataset Management Command: Supported 00:16:43.428 Write Zeroes Command: Supported 00:16:43.428 Set Features Save Field: Not Supported 00:16:43.428 Reservations: Not Supported 00:16:43.428 Timestamp: Not Supported 00:16:43.428 Copy: Not Supported 00:16:43.428 Volatile Write Cache: Present 
00:16:43.428 Atomic Write Unit (Normal): 1 00:16:43.428 Atomic Write Unit (PFail): 1 00:16:43.428 Atomic Compare & Write Unit: 1 00:16:43.428 Fused Compare & Write: Not Supported 00:16:43.428 Scatter-Gather List 00:16:43.428 SGL Command Set: Supported 00:16:43.428 SGL Keyed: Not Supported 00:16:43.428 SGL Bit Bucket Descriptor: Not Supported 00:16:43.428 SGL Metadata Pointer: Not Supported 00:16:43.428 Oversized SGL: Not Supported 00:16:43.428 SGL Metadata Address: Not Supported 00:16:43.428 SGL Offset: Supported 00:16:43.428 Transport SGL Data Block: Not Supported 00:16:43.428 Replay Protected Memory Block: Not Supported 00:16:43.428 00:16:43.428 Firmware Slot Information 00:16:43.428 ========================= 00:16:43.428 Active slot: 0 00:16:43.428 00:16:43.428 Asymmetric Namespace Access 00:16:43.428 =========================== 00:16:43.428 Change Count : 0 00:16:43.428 Number of ANA Group Descriptors : 1 00:16:43.428 ANA Group Descriptor : 0 00:16:43.428 ANA Group ID : 1 00:16:43.428 Number of NSID Values : 1 00:16:43.428 Change Count : 0 00:16:43.428 ANA State : 1 00:16:43.428 Namespace Identifier : 1 00:16:43.428 00:16:43.428 Commands Supported and Effects 00:16:43.428 ============================== 00:16:43.428 Admin Commands 00:16:43.428 -------------- 00:16:43.428 Get Log Page (02h): Supported 00:16:43.428 Identify (06h): Supported 00:16:43.428 Abort (08h): Supported 00:16:43.428 Set Features (09h): Supported 00:16:43.428 Get Features (0Ah): Supported 00:16:43.428 Asynchronous Event Request (0Ch): Supported 00:16:43.428 Keep Alive (18h): Supported 00:16:43.428 I/O Commands 00:16:43.428 ------------ 00:16:43.428 Flush (00h): Supported 00:16:43.428 Write (01h): Supported LBA-Change 00:16:43.428 Read (02h): Supported 00:16:43.428 Write Zeroes (08h): Supported LBA-Change 00:16:43.428 Dataset Management (09h): Supported 00:16:43.428 00:16:43.428 Error Log 00:16:43.428 ========= 00:16:43.428 Entry: 0 00:16:43.428 Error Count: 0x3 00:16:43.428 Submission Queue Id: 0x0 00:16:43.428 Command Id: 0x5 00:16:43.428 Phase Bit: 0 00:16:43.428 Status Code: 0x2 00:16:43.428 Status Code Type: 0x0 00:16:43.428 Do Not Retry: 1 00:16:43.428 Error Location: 0x28 00:16:43.428 LBA: 0x0 00:16:43.428 Namespace: 0x0 00:16:43.428 Vendor Log Page: 0x0 00:16:43.428 ----------- 00:16:43.428 Entry: 1 00:16:43.428 Error Count: 0x2 00:16:43.428 Submission Queue Id: 0x0 00:16:43.428 Command Id: 0x5 00:16:43.428 Phase Bit: 0 00:16:43.428 Status Code: 0x2 00:16:43.428 Status Code Type: 0x0 00:16:43.428 Do Not Retry: 1 00:16:43.428 Error Location: 0x28 00:16:43.428 LBA: 0x0 00:16:43.428 Namespace: 0x0 00:16:43.428 Vendor Log Page: 0x0 00:16:43.428 ----------- 00:16:43.428 Entry: 2 00:16:43.428 Error Count: 0x1 00:16:43.428 Submission Queue Id: 0x0 00:16:43.428 Command Id: 0x4 00:16:43.428 Phase Bit: 0 00:16:43.428 Status Code: 0x2 00:16:43.429 Status Code Type: 0x0 00:16:43.429 Do Not Retry: 1 00:16:43.429 Error Location: 0x28 00:16:43.429 LBA: 0x0 00:16:43.429 Namespace: 0x0 00:16:43.429 Vendor Log Page: 0x0 00:16:43.429 00:16:43.429 Number of Queues 00:16:43.429 ================ 00:16:43.429 Number of I/O Submission Queues: 128 00:16:43.429 Number of I/O Completion Queues: 128 00:16:43.429 00:16:43.429 ZNS Specific Controller Data 00:16:43.429 ============================ 00:16:43.429 Zone Append Size Limit: 0 00:16:43.429 00:16:43.429 00:16:43.429 Active Namespaces 00:16:43.429 ================= 00:16:43.429 get_feature(0x05) failed 00:16:43.429 Namespace ID:1 00:16:43.429 Command Set Identifier: NVM (00h) 
00:16:43.429 Deallocate: Supported 00:16:43.429 Deallocated/Unwritten Error: Not Supported 00:16:43.429 Deallocated Read Value: Unknown 00:16:43.429 Deallocate in Write Zeroes: Not Supported 00:16:43.429 Deallocated Guard Field: 0xFFFF 00:16:43.429 Flush: Supported 00:16:43.429 Reservation: Not Supported 00:16:43.429 Namespace Sharing Capabilities: Multiple Controllers 00:16:43.429 Size (in LBAs): 1310720 (5GiB) 00:16:43.429 Capacity (in LBAs): 1310720 (5GiB) 00:16:43.429 Utilization (in LBAs): 1310720 (5GiB) 00:16:43.429 UUID: 9e7a5298-9c35-4cc4-9094-46a9daa56191 00:16:43.429 Thin Provisioning: Not Supported 00:16:43.429 Per-NS Atomic Units: Yes 00:16:43.429 Atomic Boundary Size (Normal): 0 00:16:43.429 Atomic Boundary Size (PFail): 0 00:16:43.429 Atomic Boundary Offset: 0 00:16:43.429 NGUID/EUI64 Never Reused: No 00:16:43.429 ANA group ID: 1 00:16:43.429 Namespace Write Protected: No 00:16:43.429 Number of LBA Formats: 1 00:16:43.429 Current LBA Format: LBA Format #00 00:16:43.429 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:43.429 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.429 rmmod nvme_tcp 00:16:43.429 rmmod nvme_fabrics 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:43.429 
19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:43.429 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:43.686 19:54:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:44.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.509 00:16:44.509 real 0m2.782s 00:16:44.509 user 0m0.994s 00:16:44.509 sys 0m1.311s 00:16:44.509 19:54:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.509 19:54:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 ************************************ 00:16:44.509 END TEST nvmf_identify_kernel_target 00:16:44.509 ************************************ 00:16:44.509 19:54:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.509 19:54:38 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:44.509 19:54:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.509 19:54:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.509 19:54:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 ************************************ 00:16:44.509 START TEST nvmf_auth_host 00:16:44.509 ************************************ 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:44.509 * Looking for test storage... 
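The identify_kernel_target test has just finished; its EXIT trap (nvmftestfini followed by clean_kernel_target, traced a few entries above) removed the kernel target in reverse order of creation. A condensed sketch of that teardown, assuming the same configfs paths as in the setup sketch earlier — the traced "echo 0" is assumed to land in the namespace's enable file:

  echo 0 > "$subsys/namespaces/1/enable"   # disable the namespace first
  rm -f "$port/subsystems/$nqn"            # unlink the subsystem from the port
  rmdir "$subsys/namespaces/1"
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet              # unload the kernel target modules

The setup.sh run at the end of the trace then rebinds the NVMe controllers from the kernel nvme driver back to uio_pci_generic, leaving them free for the next test.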
00:16:44.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.509 19:54:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.510 Cannot find device "nvmf_tgt_br" 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.510 Cannot find device "nvmf_tgt_br2" 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:44.510 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.769 Cannot find device "nvmf_tgt_br" 
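The "Cannot find device" and "Cannot open network namespace" messages here are expected: nvmf_veth_init first tears down any leftovers from the previous test (each failing command is followed by an explicit true) before rebuilding the topology in the entries that follow. A condensed sketch of that topology, mirroring the traced commands: three veth pairs, where the SPDK-target ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, the initiator end (nvmf_init_if) keeps 10.0.0.1 in the default namespace, and the three peer ends are enslaved to the nvmf_br bridge:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                              # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                                 # bridge the three peer ends
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # (the individual "ip link set ... up" steps are omitted here for brevity)

The three pings in the following entries (10.0.0.2 and 10.0.0.3 from the default namespace, 10.0.0.1 from inside nvmf_tgt_ns_spdk) verify the bridge end to end before the target application is started.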
00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.769 Cannot find device "nvmf_tgt_br2" 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.769 19:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.769 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.769 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:45.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:45.027 00:16:45.027 --- 10.0.0.2 ping statistics --- 00:16:45.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.027 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:45.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:45.027 00:16:45.027 --- 10.0.0.3 ping statistics --- 00:16:45.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.027 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:45.027 00:16:45.027 --- 10.0.0.1 ping statistics --- 00:16:45.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.027 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:45.027 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78674 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78674 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78674 ']' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.028 19:54:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.028 19:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1495f4e316c3e2d91e59ee63c7288d79 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Qqf 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1495f4e316c3e2d91e59ee63c7288d79 0 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1495f4e316c3e2d91e59ee63c7288d79 0 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1495f4e316c3e2d91e59ee63c7288d79 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:45.963 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Qqf 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Qqf 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Qqf 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5aa5248a9661387a6a13b64df2e2a6efb19f1708d3b8abe6d821c679ff2f72fe 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.G3h 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5aa5248a9661387a6a13b64df2e2a6efb19f1708d3b8abe6d821c679ff2f72fe 3 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5aa5248a9661387a6a13b64df2e2a6efb19f1708d3b8abe6d821c679ff2f72fe 3 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5aa5248a9661387a6a13b64df2e2a6efb19f1708d3b8abe6d821c679ff2f72fe 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.G3h 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.G3h 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.G3h 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0bf988b927b94bf4ba455cf0a79a11c8abb463c96107845 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JEH 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0bf988b927b94bf4ba455cf0a79a11c8abb463c96107845 0 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0bf988b927b94bf4ba455cf0a79a11c8abb463c96107845 0 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0bf988b927b94bf4ba455cf0a79a11c8abb463c96107845 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JEH 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JEH 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JEH 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5786fe75d821b7bdf7b9c585d5c71f5ae764b3d060b2354b 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bcP 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5786fe75d821b7bdf7b9c585d5c71f5ae764b3d060b2354b 2 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5786fe75d821b7bdf7b9c585d5c71f5ae764b3d060b2354b 2 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5786fe75d821b7bdf7b9c585d5c71f5ae764b3d060b2354b 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bcP 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bcP 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.bcP 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=71f6130154474e8e7dfc423021b82cf3 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.18O 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 71f6130154474e8e7dfc423021b82cf3 
1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 71f6130154474e8e7dfc423021b82cf3 1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=71f6130154474e8e7dfc423021b82cf3 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:46.222 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.18O 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.18O 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.18O 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87ff1151726f6fe050ece51d7c3d9d87 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.u48 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87ff1151726f6fe050ece51d7c3d9d87 1 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87ff1151726f6fe050ece51d7c3d9d87 1 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87ff1151726f6fe050ece51d7c3d9d87 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:46.480 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.u48 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.u48 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.u48 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:46.481 19:54:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cff5b2b8bf2c99fcaed9e44d036056ca21af96a63617a5c9 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MsY 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cff5b2b8bf2c99fcaed9e44d036056ca21af96a63617a5c9 2 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cff5b2b8bf2c99fcaed9e44d036056ca21af96a63617a5c9 2 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cff5b2b8bf2c99fcaed9e44d036056ca21af96a63617a5c9 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MsY 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MsY 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.MsY 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7ec8f04c0dfe37e68cc6995a600fa057 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3JJ 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7ec8f04c0dfe37e68cc6995a600fa057 0 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7ec8f04c0dfe37e68cc6995a600fa057 0 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7ec8f04c0dfe37e68cc6995a600fa057 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3JJ 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3JJ 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3JJ 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=839315a5cf59021e98041320ff689907dcad89f91b6c77f3538528676c6036c3 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cjs 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 839315a5cf59021e98041320ff689907dcad89f91b6c77f3538528676c6036c3 3 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 839315a5cf59021e98041320ff689907dcad89f91b6c77f3538528676c6036c3 3 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=839315a5cf59021e98041320ff689907dcad89f91b6c77f3538528676c6036c3 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:46.481 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cjs 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cjs 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cjs 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78674 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78674 ']' 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
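Each gen_dhchap_key call above pulls the secret from /dev/urandom with xxd -p -c0, then pipes it through an inline python snippet that writes a "DHHC-1:<hash-id>:<base64>:" string into a /tmp/spdk.key-* file (the base64 payloads in this log decode back to the hex strings shown, plus four trailing bytes). Below is a rough stand-alone equivalent; it assumes the TP 8006 secret representation, base64 of the secret bytes followed by their little-endian CRC-32, and a simplified mktemp template, so the real helper in nvmf/common.sh may differ in detail:

# Sketch of the key generation seen above: hash-id 0..3 = null/sha256/sha384/sha512,
# secret = the ASCII hex string itself, suffixed with its CRC-32 before base64.
gen_dhchap_key_sketch() {
    local hash_id=$1 nbytes=$2               # e.g. "0" 16 or "3" 32, matching the calls above
    local secret file
    secret=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
    file=$(mktemp -t spdk.key-XXX)
    python3 -c 'import base64,sys,zlib; s=sys.argv[2].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' "$hash_id" "$secret" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
# gen_dhchap_key_sketch 1 16 corresponds to the "gen_dhchap_key sha256 32" calls above.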
00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.740 19:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Qqf 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.G3h ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.G3h 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JEH 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.bcP ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bcP 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.18O 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.u48 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u48 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.MsY 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3JJ ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3JJ 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cjs 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.999 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:47.000 19:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:47.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:47.259 Waiting for block devices as requested 00:16:47.519 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:47.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:48.085 No valid GPT data, bailing 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:48.085 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:48.342 No valid GPT data, bailing 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:48.342 No valid GPT data, bailing 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:48.342 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:48.342 No valid GPT data, bailing 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:48.343 19:54:42 
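The loop above walks /sys/block/nvme* to find namespaces that can back the kernel target: zoned devices are skipped, and anything already carrying a partition table is treated as in use ("No valid GPT data, bailing" is the expected, good outcome here; the suite also consults scripts/spdk-gpt.py, omitted below). A condensed illustration of that selection, not the suite's code, which keeps iterating and ends up with the last survivor, /dev/nvme1n1:

pick_backing_nvme() {                        # illustrative condensation
    local block
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        # skip zoned namespaces
        [[ -e $block/queue/zoned && $(<$block/queue/zoned) != none ]] && continue
        # skip anything that already carries a partition table
        [[ -n $(blkid -s PTTYPE -o value "/dev/${block##*/}") ]] && continue
        echo "/dev/${block##*/}"
        return 0
    done
    return 1
}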
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:48.343 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -a 10.0.0.1 -t tcp -s 4420 00:16:48.600 00:16:48.600 Discovery Log Number of Records 2, Generation counter 2 00:16:48.600 =====Discovery Log Entry 0====== 00:16:48.600 trtype: tcp 00:16:48.600 adrfam: ipv4 00:16:48.600 subtype: current discovery subsystem 00:16:48.600 treq: not specified, sq flow control disable supported 00:16:48.600 portid: 1 00:16:48.600 trsvcid: 4420 00:16:48.600 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:48.600 traddr: 10.0.0.1 00:16:48.600 eflags: none 00:16:48.600 sectype: none 00:16:48.600 =====Discovery Log Entry 1====== 00:16:48.600 trtype: tcp 00:16:48.600 adrfam: ipv4 00:16:48.600 subtype: nvme subsystem 00:16:48.600 treq: not specified, sq flow control disable supported 00:16:48.600 portid: 1 00:16:48.600 trsvcid: 4420 00:16:48.600 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:48.600 traddr: 10.0.0.1 00:16:48.600 eflags: none 00:16:48.600 sectype: none 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- 
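The mkdir/echo/ln -s sequence above is the kernel-side half of the test: an nvmet subsystem with one namespace backed by /dev/nvme1n1, a TCP port on 10.0.0.1:4420, an allowed-hosts entry for nqn.2024-02.io.spdk:host0, and (via nvmet_auth_set_key) that host's DH-HMAC-CHAP parameters. The xtrace does not print the redirection targets, so the configfs attribute paths below are the standard nvmet names and should be read as an assumed reconstruction, not a copy of nvmf/common.sh or host/auth.sh:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
echo 0             > "$subsys/attr_allow_any_host"       # only allowed_hosts may connect
echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
echo tcp           > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
ln -s "$host"   "$subsys/allowed_hosts/"

# nvmet_auth_set_key sha256 ffdhe2048 1: program the host's DH-HMAC-CHAP settings.
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.JEH   > "$host/dhchap_key"        # host secret  (keys[1])
cat /tmp/spdk.key-sha384.bcP > "$host/dhchap_ctrl_key"   # ctrlr secret (ckeys[1])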
host/auth.sh@93 -- # IFS=, 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.600 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.601 nvme0n1 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.601 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- 
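On the initiator side, rpc_cmd in this trace is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so one connect_authenticate pass boils down to the calls below. The RPC names, flags, key slots and file paths are the ones already visible above; only the flattening into direct rpc.py invocations is an illustration:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Register the host and controller secrets generated earlier with the keyring.
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.JEH
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bcP

# Restrict the initiator to the digests/dhgroups under test, then dial the kernel target.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Success criterion used by the test before tearing the controller down again:
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0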
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- 
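The host/auth.sh@100-@103 lines above open the sweep that fills the rest of this log: for every digest and FFDHE group advertised earlier (sha256/sha384/sha512 times ffdhe2048 through ffdhe8192) and every key slot 0-4, the kernel host entry is reprogrammed and a fresh attach/verify/detach cycle is run. Schematically, with the suite's own function names but a condensed loop body that is an illustration rather than lifted code:

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                          # key slots 0..4
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # kernel target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK initiator side
        done
    done
done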
nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.859 19:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 nvme0n1 00:16:48.859 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.859 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.859 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.859 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.859 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.860 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.118 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.118 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.118 nvme0n1 00:16:49.118 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.119 19:54:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.119 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 nvme0n1 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:49.376 19:54:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 nvme0n1 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:49.376 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.635 nvme0n1 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.635 19:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.893 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.894 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.152 nvme0n1 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.152 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.153 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.412 nvme0n1 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.412 19:54:44 
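Editor's note: the recurring `[[ nvme0 == \n\v\m\e\0 ]]` lines are simply how xtrace renders a quoted right-hand side; the check compares the name returned by bdev_nvme_get_controllers against nvme0 before the controller is detached. A hypothetical helper with the same effect (the function name is illustrative and not from the script; rpc_cmd as in the earlier sketch):

    verify_and_detach() {
        local name
        name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $name == "nvme0" ]]              # xtrace shows the quoted side as \n\v\m\e\0
        rpc_cmd bdev_nvme_detach_controller nvme0
    }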
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.412 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.670 nvme0n1 00:16:50.670 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.670 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.670 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.670 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.671 nvme0n1 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.671 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.930 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
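Editor's note: get_main_ns_ip, traced at nvmf/common.sh@741-755, keeps a transport-to-variable-name map and resolves the stored name with bash indirection, which is why the trace first shows NVMF_INITIATOR_IP and only afterwards 10.0.0.1. A rough reconstruction, assuming the transport is selected by a variable such as TEST_TRANSPORT (that name is not visible in this excerpt) and that NVMF_INITIATOR_IP=10.0.0.1 as in this run:

    : "${TEST_TRANSPORT:=tcp}" "${NVMF_INITIATOR_IP:=10.0.0.1}"
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}    # variable *name*, e.g. NVMF_INITIATOR_IP
        echo "${!ip}"                           # dereferences to 10.0.0.1 in this run
    }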
00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.931 19:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 nvme0n1 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.931 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
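Editor's note: at this point the sweep moves from ffdhe3072 to ffdhe4096. The host/auth.sh@101-104 markers outline two nested loops driving this whole stretch of the log. The shape below is inferred from the trace rather than copied from the script; only the DH groups visible in this excerpt are listed, and only the sha256 digest appears here:

    # keys[] / ckeys[] hold the DHHC-1 secrets registered earlier by the harness;
    # nvmet_auth_set_key and connect_authenticate are the functions seen in the trace.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # provision the target
            connect_authenticate sha256 "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done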
00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.593 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.852 nvme0n1 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.852 19:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.852 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.852 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.852 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.111 nvme0n1 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.111 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 nvme0n1 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.371 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 nvme0n1 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.631 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.632 19:54:46 
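Editor's note: key index 4 is the case without a controller key. ckeys[4] is empty, so the array expansion at host/auth.sh@58 contributes no words and the attach runs with --dhchap-key only, i.e. without requesting controller (bidirectional) authentication. A minimal sketch of that idiom, reusing the rpc_cmd stand-in from the earlier sketch:

    keyid=4
    ckeys[keyid]=''     # the test leaves the last controller key empty
    # expands to (--dhchap-ctrlr-key ckey4) when ckeys[4] is set, to () otherwise
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra args: ${#ckey[@]}"      # 0 here, so no controller authentication
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"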
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.632 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.891 nvme0n1 00:16:52.891 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.891 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.891 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.891 19:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.891 19:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.891 19:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.797 19:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.798 19:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.056 nvme0n1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.056 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 nvme0n1 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.572 
19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.572 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.830 nvme0n1 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.830 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.831 19:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.087 nvme0n1 00:16:56.087 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.087 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.087 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.087 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.087 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 19:54:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.353 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 nvme0n1 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.627 19:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.193 nvme0n1 00:16:57.193 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.193 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.193 19:54:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.193 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.193 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.193 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.452 19:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.019 nvme0n1 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.019 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.584 nvme0n1 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.584 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.584 
19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
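For orientation: the get_main_ns_ip trace that runs through this point (the local ip declaration, the ip_candidates map, the [[ -z ... ]] guards, and the final echo 10.0.0.1) is simply selecting the initiator-side address for the transport under test. Below is a minimal sketch of that selection logic, reconstructed only from the variable names visible in the trace — the TEST_TRANSPORT name and the return codes are assumptions, and NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP are taken to be exported by the test environment; this is illustrative, not the verbatim nvmf/common.sh source:

    # Pick the IP the host side should dial for the current transport.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs use the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs (this job) use the initiator IP
        # TEST_TRANSPORT is assumed to be "tcp" here; fail if it or its mapping is empty.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion resolves the variable *name* to its value, e.g. 10.0.0.1.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

The resolved address (10.0.0.1 in this run) is what every bdev_nvme_attach_controller call in this log dials on port 4420.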
00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.585 19:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 nvme0n1 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.148 
19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.956 nvme0n1 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.956 19:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 nvme0n1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
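The per-key pattern repeating throughout this section is the same for every digest/DH-group pair: pin the host to one digest and one DH group, attach with the key under test (plus the controller key when one is defined — key id 4 has none), check that the controller actually came up, then detach before the next iteration. Below is a condensed sketch of that connect/verify/detach pass, built only from the RPCs visible in this trace; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, the NQNs and 10.0.0.1:4420 are copied from the log, and the keyN/ckeyN key names plus the ckeys[] array are assumed to have been registered earlier in the test:

    # One authentication pass, as exercised in this log for sha256/sha384 with
    # ffdhe2048 through ffdhe8192 and key ids 0-4.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=()
        # Only pass a controller key when one exists for this key id.
        [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

        # Restrict the host to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with DH-HMAC-CHAP; this only succeeds if authentication does.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the controller exists (its nvme0n1 namespace shows up in the log),
        # then tear it down for the next digest/dhgroup/key combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }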
00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.957 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 nvme0n1 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 nvme0n1 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:00.214 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 nvme0n1 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.473 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.731 nvme0n1 00:17:00.731 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
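The nvmf/common.sh entries traced just above come from the get_main_ns_ip helper: it keeps a small per-transport map of environment-variable names and, for this tcp run, ends up echoing the initiator address 10.0.0.1 that the host will dial. A minimal sketch of that selection logic, reconstructed only from the commands visible in this trace (the TEST_TRANSPORT name and the function name are assumptions; NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are taken from the log):

```bash
#!/usr/bin/env bash
# Sketch of the address-resolution logic seen in the get_main_ns_ip trace above.
# NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are assumed to be exported by the
# surrounding test environment (10.0.0.1 for the tcp run in this log).
get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # rdma runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # tcp runs (this log) dial the initiator IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport selected
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport

    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
    [[ -z ${!ip} ]] && return 1            # that variable must itself be set
    echo "${!ip}"                          # e.g. 10.0.0.1 in this trace
}
```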
00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.732 nvme0n1 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.732 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.990 19:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
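Each iteration in this stretch of the log repeats the same cycle for one digest/dhgroup/keyid combination: the target-side secret is installed via nvmet_auth_set_key, then connect_authenticate restricts the host's DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key exists for that keyid), confirms the controller name with bdev_nvme_get_controllers piped to jq, and detaches it before the next combination. A condensed host-side sketch of one such iteration, using only the RPCs and values visible in this trace (rpc_cmd is the suite's RPC wrapper; the 10.0.0.1:4420 address and nqn.2024-02.io.spdk:* names are taken from this run, and the function name here is illustrative):

```bash
#!/usr/bin/env bash
# Condensed view of one connect_authenticate iteration, as traced above.
# digest/dhgroup/keyid come from the surrounding loops (sha384,
# ffdhe2048..ffdhe6144, key IDs 0..4 in this log); ckeys[] mirrors the
# controller-key array referenced in the trace.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ctrlr_key=()

    # Pass a controller key only when one is defined for this key ID
    # (keyid 4 in this log has none), matching the ${ckeys[keyid]:+...} pattern.
    [[ -n ${ckeys[keyid]:-} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key; authentication happens during connect.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # Success is judged by the controller appearing under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```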
00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.990 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 nvme0n1 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.249 nvme0n1 00:17:01.249 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.249 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.249 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.249 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.250 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.508 nvme0n1 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.508 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.509 nvme0n1 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.509 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.509 19:54:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.766 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.766 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.767 nvme0n1 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.767 19:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.767 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.767 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.767 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.024 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.025 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.283 nvme0n1 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.283 19:54:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.283 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.542 nvme0n1 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:02.542 19:54:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.542 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.800 nvme0n1 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.800 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:02.801 19:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.060 nvme0n1 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.060 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.319 nvme0n1 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.319 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.578 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.837 nvme0n1 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.837 19:54:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:03.837 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.838 19:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.112 nvme0n1 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.112 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.370 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.371 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 nvme0n1 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:04.629 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.630 19:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.888 nvme0n1 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.888 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.147 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.714 nvme0n1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.714 19:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.281 nvme0n1 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.281 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.282 19:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 nvme0n1 00:17:06.863 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.864 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.123 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.691 nvme0n1 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.691 19:55:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.691 19:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 nvme0n1 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.257 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.258 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.516 nvme0n1 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.516 19:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.516 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.517 nvme0n1 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.517 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 nvme0n1 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.775 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.776 19:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.776 19:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 nvme0n1 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 nvme0n1 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.302 nvme0n1 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 
19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:09.302 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.303 19:55:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.303 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.560 nvme0n1 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.560 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
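The nvmet_auth_set_key steps traced just above (the echo 'hmac(sha512)', echo ffdhe3072, and DHHC-1 secret echoes at host/auth.sh@48-@51) install the per-host DH-HMAC-CHAP parameters on the kernel nvmet target. A minimal sketch of what those writes amount to, assuming the standard nvmet configfs layout under /sys/kernel/config; the directory path and attribute names are assumptions, not visible in this trace, and only the host NQN is taken from the attach calls above:

  # Assumed configfs entry for the host used by this run.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"   # FFDHE group under test
  echo 'DHHC-1:01:<host secret>:'       > "$host_dir/dhchap_key"       # host secret (keyN in the trace)
  echo 'DHHC-1:01:<controller secret>:' > "$host_dir/dhchap_ctrl_key"  # controller secret (ckeyN), only for the bidirectional cases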
00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.561 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.819 nvme0n1 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.819 19:55:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.819 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
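On the host side, each connect_authenticate round reduces to the two RPCs traced around this point: restrict the initiator to one digest/dhgroup pair, then attach using the matching key names. A sketch of the same calls issued through the standalone scripts/rpc.py client rather than the test's rpc_cmd wrapper; key3 and ckey3 are key names the test registered earlier, outside this excerpt:

  # Limit negotiation to the combination under test (sha512 + ffdhe3072 at this point in the log).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach with the host secret; --dhchap-ctrlr-key is passed only when a controller secret exists for this keyid.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3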
00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.820 19:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.820 nvme0n1 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.820 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.078 
19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 nvme0n1 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.078 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.337 nvme0n1 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.337 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.595 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.595 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.595 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.596 19:55:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.596 nvme0n1 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.596 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
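The rest of this section repeats the same pattern across the key table for ffdhe4096. A sketch of the sweep's overall shape, reconstructed from the loop markers visible in the trace (host/auth.sh@101-@104); only sha512 appears in this excerpt, so the digest is written literally, and the helper internals beyond what the trace shows are assumptions:

  for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do          # key0..key4, some with a matching ckey for bidirectional auth
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # target side: install the secrets for this combination
      connect_authenticate sha512 "$dhgroup" "$keyid"   # host side: set_options, attach, then verify and detach:
      #   [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      #   rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done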
00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.856 19:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.856 nvme0n1 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.856 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.115 nvme0n1 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.115 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.374 nvme0n1 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.374 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
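The nvmf/common.sh lines traced immediately above and below are the get_main_ns_ip helper that host/auth.sh@61 calls before every attach: it maps the transport under test to an address variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and falls back to 10.0.0.1, which is the address every bdev_nvme_attach_controller call in this run ends up using. A simplified sketch of that helper follows; the transport variable name is assumed for illustration and this is not the verbatim nvmf/common.sh source.

    # Simplified sketch of get_main_ns_ip as traced above (nvmf/common.sh).
    # "$TEST_TRANSPORT" is an assumed stand-in for however the suite names the
    # transport; the trace only shows the literal value "tcp".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z "$TEST_TRANSPORT" ]] && return 1           # no transport selected
        ip=${ip_candidates[$TEST_TRANSPORT]}             # e.g. NVMF_INITIATOR_IP
        [[ -z "$ip" ]] && return 1                       # unknown transport
        ip=${!ip:-10.0.0.1}                              # dereference, default 10.0.0.1
        echo "$ip"
    }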
00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.633 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.892 nvme0n1 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.892 19:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:11.892 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
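Each connect_authenticate invocation such as the one above repeats the same cycle for its (digest, dhgroup, keyid) tuple: nvmet_auth_set_key installs the DHHC-1 secret (and controller secret, when a ckey exists) on the target, bdev_nvme_set_options pins the initiator to the digest and DH group under test, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with --dhchap-key keyN and, when present, --dhchap-ctrlr-key ckeyN, bdev_nvme_get_controllers confirms the nvme0 controller came up, and bdev_nvme_detach_controller tears it down before the next keyid. A condensed sketch of one such cycle, assuming rpc_cmd wraps scripts/rpc.py and the keys/ckeys arrays are populated as earlier in this run (illustrative only, not the verbatim host/auth.sh source):

    # One (digest, dhgroup, keyid) DH-HMAC-CHAP cycle, condensed from the trace.
    digest=sha512
    dhgroup=ffdhe6144
    keyid=1

    # Target side: install the key (and controller key, if defined) for this keyid.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: restrict negotiation to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with matching key material, verify the controller, then tear down.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0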
00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.893 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.152 nvme0n1 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.152 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.411 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 nvme0n1 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 19:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.929 nvme0n1 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.929 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.189 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.447 nvme0n1 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.447 19:55:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ5NWY0ZTMxNmMzZTJkOTFlNTllZTYzYzcyODhkNznK1zGV: 00:17:13.447 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: ]] 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFhNTI0OGE5NjYxMzg3YTZhMTNiNjRkZjJlMmE2ZWZiMTlmMTcwOGQzYjhhYmU2ZDgyMWM2NzlmZjJmNzJmZZMupnA=: 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.448 19:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.014 nvme0n1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.014 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.581 nvme0n1 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.581 19:55:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.581 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.582 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzFmNjEzMDE1NDQ3NGU4ZTdkZmM0MjMwMjFiODJjZjMuvm8t: 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: ]] 00:17:14.840 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmZjExNTE3MjZmNmZlMDUwZWNlNTFkN2MzZDlkODe+n4ZX: 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.841 19:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 nvme0n1 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.407 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZmNWIyYjhiZjJjOTlmY2FlZDllNDRkMDM2MDU2Y2EyMWFmOTZhNjM2MTdhNWM5j98SvA==: 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VjOGYwNGMwZGZlMzdlNjhjYzY5OTVhNjAwZmEwNTfHOQlH: 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:15.408 19:55:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.408 19:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 nvme0n1 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM5MzE1YTVjZjU5MDIxZTk4MDQxMzIwZmY2ODk5MDdkY2FkODlmOTFiNmM3N2YzNTM4NTI4Njc2YzYwMzZjMyN7iTc=: 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:15.974 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 nvme0n1 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBiZjk4OGI5MjdiOTRiZjRiYTQ1NWNmMGE3OWExMWM4YWJiNDYzYzk2MTA3ODQ1GLfDTQ==: 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTc4NmZlNzVkODIxYjdiZGY3YjljNTg1ZDVjNzFmNWFlNzY0YjNkMDYwYjIzNTRitZJ7DQ==: 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.541 
19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 request: 00:17:16.541 { 00:17:16.541 "name": "nvme0", 00:17:16.541 "trtype": "tcp", 00:17:16.541 "traddr": "10.0.0.1", 00:17:16.541 "adrfam": "ipv4", 00:17:16.541 "trsvcid": "4420", 00:17:16.541 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.541 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.541 "prchk_reftag": false, 00:17:16.541 "prchk_guard": false, 00:17:16.541 "hdgst": false, 00:17:16.541 "ddgst": false, 00:17:16.541 "method": "bdev_nvme_attach_controller", 00:17:16.541 "req_id": 1 00:17:16.541 } 00:17:16.541 Got JSON-RPC error response 00:17:16.541 response: 00:17:16.541 { 00:17:16.541 "code": -5, 00:17:16.541 "message": "Input/output error" 00:17:16.541 } 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.541 19:55:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.801 request: 00:17:16.801 { 00:17:16.801 "name": "nvme0", 00:17:16.801 "trtype": "tcp", 00:17:16.801 "traddr": "10.0.0.1", 00:17:16.801 "adrfam": "ipv4", 00:17:16.801 "trsvcid": "4420", 00:17:16.801 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.801 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.801 "prchk_reftag": false, 00:17:16.801 "prchk_guard": false, 00:17:16.801 "hdgst": false, 00:17:16.801 "ddgst": false, 00:17:16.801 "dhchap_key": "key2", 00:17:16.801 "method": "bdev_nvme_attach_controller", 00:17:16.801 "req_id": 1 00:17:16.801 } 00:17:16.801 Got JSON-RPC error response 00:17:16.801 response: 00:17:16.801 { 00:17:16.801 "code": -5, 00:17:16.801 "message": "Input/output error" 00:17:16.801 } 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.801 19:55:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.801 request: 00:17:16.801 { 00:17:16.801 "name": "nvme0", 00:17:16.801 "trtype": "tcp", 00:17:16.801 "traddr": "10.0.0.1", 00:17:16.801 "adrfam": "ipv4", 
00:17:16.801 "trsvcid": "4420", 00:17:16.801 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.801 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.801 "prchk_reftag": false, 00:17:16.801 "prchk_guard": false, 00:17:16.801 "hdgst": false, 00:17:16.801 "ddgst": false, 00:17:16.801 "dhchap_key": "key1", 00:17:16.801 "dhchap_ctrlr_key": "ckey2", 00:17:16.801 "method": "bdev_nvme_attach_controller", 00:17:16.801 "req_id": 1 00:17:16.801 } 00:17:16.801 Got JSON-RPC error response 00:17:16.801 response: 00:17:16.801 { 00:17:16.801 "code": -5, 00:17:16.801 "message": "Input/output error" 00:17:16.801 } 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.801 19:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.801 rmmod nvme_tcp 00:17:16.801 rmmod nvme_fabrics 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78674 ']' 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78674 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78674 ']' 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78674 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.801 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78674 00:17:17.060 killing process with pid 78674 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78674' 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78674 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78674 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.060 
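Condensed, the section above is a set of negative attach tests: with the initiator limited to sha256/ffdhe2048, bdev_nvme_attach_controller is expected to fail with JSON-RPC error -5 (Input/output error) when it offers no DH-CHAP key, only key2, or the mismatched key1/ckey2 pair. A minimal re-run of that flow with the same rpc.py invocations, assuming the key names key1, key2 and ckey2 were registered earlier in auth.sh (that part precedes this excerpt), would look like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Limit the initiator to the digest/DH group provisioned on the kernel target.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Each attach attempt must fail; success would mean authentication was not enforced.
for extra in "" "--dhchap-key key2" "--dhchap-key key1 --dhchap-ctrlr-key ckey2"; do
    # $extra is intentionally unquoted so it expands into separate arguments
    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 $extra; then
        echo "unexpected successful attach with '$extra'" >&2
        exit 1
    fi
done

Between attempts the trace also re-runs bdev_nvme_get_controllers and checks jq length against zero, confirming that a failed attach does not leave a half-created controller behind.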
19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:17.060 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:17.319 19:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.886 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.145 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.145 19:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Qqf /tmp/spdk.key-null.JEH /tmp/spdk.key-sha256.18O /tmp/spdk.key-sha384.MsY /tmp/spdk.key-sha512.cjs /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:18.145 19:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:18.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:18.404 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:18.404 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:18.404 00:17:18.404 real 0m33.967s 00:17:18.404 user 0m31.110s 00:17:18.404 sys 0m3.668s 00:17:18.404 19:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.404 19:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.404 
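The cleanup above unwinds the kernel nvmet target in the reverse order of its construction: drop the host ACL, delete the host entry, disable and remove the namespace, unlink the subsystem from the port, remove the port and subsystem directories, and finally unload the modules. Pulled out of the trace and annotated (the redirect target of the traced echo 0 is not visible; writing it to the namespace enable attribute is the assumption here):

cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"        # remove the host ACL symlink
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"                # delete the host entry
echo 0 > "$subsys/namespaces/1/enable"                      # assumed target of the traced 'echo 0'
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"  # detach the subsystem from port 1
rmdir "$subsys/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                 # unload once nothing holds the modules

The order matters: the ACL link, the namespace and the port association have to go before configfs will allow their parent directories to be removed.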
************************************ 00:17:18.404 END TEST nvmf_auth_host 00:17:18.404 ************************************ 00:17:18.404 19:55:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.404 19:55:12 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:18.404 19:55:12 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:18.404 19:55:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.404 19:55:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.404 19:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.404 ************************************ 00:17:18.404 START TEST nvmf_digest 00:17:18.404 ************************************ 00:17:18.404 19:55:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:18.663 * Looking for test storage... 00:17:18.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.663 19:55:12 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.664 Cannot find device "nvmf_tgt_br" 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.664 Cannot find device "nvmf_tgt_br2" 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:18.664 Cannot find device "nvmf_tgt_br" 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:18.664 19:55:12 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:18.664 Cannot find device "nvmf_tgt_br2" 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.664 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:18.933 19:55:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.933 19:55:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:17:18.933 00:17:18.933 --- 10.0.0.2 ping statistics --- 00:17:18.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.933 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:18.933 00:17:18.933 --- 10.0.0.3 ping statistics --- 00:17:18.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.933 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:18.933 00:17:18.933 --- 10.0.0.1 ping statistics --- 00:17:18.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.933 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.933 19:55:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:18.934 ************************************ 00:17:18.934 START TEST nvmf_digest_clean 00:17:18.934 ************************************ 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:18.934 19:55:13 
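nvmf_veth_init, traced above, builds the whole test network from scratch: a namespace for the target, three veth pairs, a bridge joining their root-namespace ends, one iptables rule admitting NVMe/TCP on port 4420 plus one allowing bridged forwarding, and ping checks across 10.0.0.1 through 10.0.0.3. Boiled down from the trace, with the same interface names and addresses:

NS=nvmf_tgt_ns_spdk
ip netns add $NS

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns $NS
ip link set nvmf_tgt_if2 netns $NS

# 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec $NS ip link set nvmf_tgt_if up
ip netns exec $NS ip link set nvmf_tgt_if2 up
ip netns exec $NS ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions before any NVMe traffic is attempted
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec $NS ping -c 1 10.0.0.1

The earlier "Cannot find device" and "Cannot open network namespace" messages are the expected output of the teardown pass that runs before this setup; on a fresh VM there is nothing yet to delete.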
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80226 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80226 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80226 ']' 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.934 19:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.934 [2024-07-15 19:55:13.141521] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:18.934 [2024-07-15 19:55:13.141616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.203 [2024-07-15 19:55:13.279811] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.203 [2024-07-15 19:55:13.394779] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.203 [2024-07-15 19:55:13.394838] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.203 [2024-07-15 19:55:13.394853] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.203 [2024-07-15 19:55:13.394864] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.203 [2024-07-15 19:55:13.394873] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:19.203 [2024-07-15 19:55:13.394909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.135 [2024-07-15 19:55:14.246198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:20.135 null0 00:17:20.135 [2024-07-15 19:55:14.296297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.135 [2024-07-15 19:55:14.320312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80258 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80258 /var/tmp/bperf.sock 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80258 ']' 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:20.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.135 19:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.392 [2024-07-15 19:55:14.383941] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:20.392 [2024-07-15 19:55:14.384238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80258 ] 00:17:20.392 [2024-07-15 19:55:14.525796] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.650 [2024-07-15 19:55:14.642037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.216 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.216 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:21.216 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:21.216 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:21.216 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:21.475 [2024-07-15 19:55:15.620382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:21.475 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.475 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.099 nvme0n1 00:17:22.099 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:22.099 19:55:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:22.099 Running I/O for 2 seconds... 
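Each run_bperf iteration drives the same sequence: bdevperf is launched with --wait-for-rpc on its own RPC socket, initialization is completed over that socket, an NVMe-oF TCP controller is attached with the digest option under test (data digest via --ddgst here), and then the workload configured on the bdevperf command line runs for two seconds. The RPC side of that, exactly as traced:

bperf_sock=/var/tmp/bperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# bdevperf was started elsewhere as:
#   build/examples/bdevperf -m 2 -r $bperf_sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
$rpc -s $bperf_sock framework_start_init
$rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# start the pre-configured randread workload and wait for the results
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests

Keeping bdevperf on its own socket (/var/tmp/bperf.sock) lets it coexist with the nvmf target application, which uses the default /var/tmp/spdk.sock.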
00:17:24.000 00:17:24.000 Latency(us) 00:17:24.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.000 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:24.000 nvme0n1 : 2.01 17133.76 66.93 0.00 0.00 7465.13 6911.07 17515.99 00:17:24.000 =================================================================================================================== 00:17:24.000 Total : 17133.76 66.93 0.00 0.00 7465.13 6911.07 17515.99 00:17:24.000 0 00:17:24.000 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:24.000 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:24.000 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:24.000 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:24.000 | select(.opcode=="crc32c") 00:17:24.000 | "\(.module_name) \(.executed)"' 00:17:24.000 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80258 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80258 ']' 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80258 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80258 00:17:24.259 killing process with pid 80258 00:17:24.259 Received shutdown signal, test time was about 2.000000 seconds 00:17:24.259 00:17:24.259 Latency(us) 00:17:24.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.259 =================================================================================================================== 00:17:24.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80258' 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80258 00:17:24.259 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80258 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
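Once a run finishes, the script pulls the accel framework statistics out of the bperf instance and verifies that crc32c digests were actually computed, and by the expected module; with dsa_initiator=false that module must be software. The check the trace performs, collected in one place:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

read -r acc_module acc_executed < <(
    $rpc -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# a zero count would mean digests were never handed to the accel framework at all
(( acc_executed > 0 )) && [[ $acc_module == software ]]

Only after this check passes is the bperf process killed and the next run_bperf invocation started with a different I/O size and queue depth.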
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80324 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80324 /var/tmp/bperf.sock 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80324 ']' 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.517 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.518 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.518 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.518 19:55:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.518 [2024-07-15 19:55:18.686959] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:24.518 [2024-07-15 19:55:18.687212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:24.518 Zero copy mechanism will not be used. 
00:17:24.518 llocations --file-prefix=spdk_pid80324 ] 00:17:24.777 [2024-07-15 19:55:18.817493] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.777 [2024-07-15 19:55:18.907731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:25.710 [2024-07-15 19:55:19.885289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.710 19:55:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.275 nvme0n1 00:17:26.275 19:55:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:26.275 19:55:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:26.275 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.275 Zero copy mechanism will not be used. 00:17:26.275 Running I/O for 2 seconds... 
00:17:28.175 00:17:28.175 Latency(us) 00:17:28.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.175 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:28.176 nvme0n1 : 2.00 8232.11 1029.01 0.00 0.00 1940.60 1705.43 4676.89 00:17:28.176 =================================================================================================================== 00:17:28.176 Total : 8232.11 1029.01 0.00 0.00 1940.60 1705.43 4676.89 00:17:28.176 0 00:17:28.176 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:28.176 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:28.176 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:28.176 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:28.176 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:28.176 | select(.opcode=="crc32c") 00:17:28.176 | "\(.module_name) \(.executed)"' 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80324 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80324 ']' 00:17:28.433 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80324 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80324 00:17:28.434 killing process with pid 80324 00:17:28.434 Received shutdown signal, test time was about 2.000000 seconds 00:17:28.434 00:17:28.434 Latency(us) 00:17:28.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.434 =================================================================================================================== 00:17:28.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80324' 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80324 00:17:28.434 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80324 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80383 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80383 /var/tmp/bperf.sock 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80383 ']' 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.692 19:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:28.692 [2024-07-15 19:55:22.867566] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:17:28.692 [2024-07-15 19:55:22.867780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80383 ] 00:17:28.950 [2024-07-15 19:55:23.000967] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.950 [2024-07-15 19:55:23.090538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.883 19:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.883 19:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:29.883 19:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:29.883 19:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:29.883 19:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:30.141 [2024-07-15 19:55:24.171081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:30.141 19:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.141 19:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.399 nvme0n1 00:17:30.399 19:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:30.399 19:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:30.399 Running I/O for 2 seconds... 
00:17:32.928 00:17:32.928 Latency(us) 00:17:32.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.928 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.928 nvme0n1 : 2.00 18264.83 71.35 0.00 0.00 7001.46 6196.13 15371.17 00:17:32.928 =================================================================================================================== 00:17:32.928 Total : 18264.83 71.35 0.00 0.00 7001.46 6196.13 15371.17 00:17:32.928 0 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:32.928 | select(.opcode=="crc32c") 00:17:32.928 | "\(.module_name) \(.executed)"' 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80383 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80383 ']' 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80383 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.928 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80383 00:17:32.929 killing process with pid 80383 00:17:32.929 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.929 00:17:32.929 Latency(us) 00:17:32.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.929 =================================================================================================================== 00:17:32.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.929 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.929 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.929 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80383' 00:17:32.929 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80383 00:17:32.929 19:55:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80383 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80439 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80439 /var/tmp/bperf.sock 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80439 ']' 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:32.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.929 19:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:33.187 [2024-07-15 19:55:27.210682] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:33.187 [2024-07-15 19:55:27.211755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80439 ] 00:17:33.187 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:33.187 Zero copy mechanism will not be used. 
00:17:33.187 [2024-07-15 19:55:27.349808] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.446 [2024-07-15 19:55:27.443624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.012 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.012 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:34.012 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:34.012 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:34.012 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:34.335 [2024-07-15 19:55:28.330846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:34.335 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.335 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.617 nvme0n1 00:17:34.617 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:34.617 19:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:34.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:34.617 Zero copy mechanism will not be used. 00:17:34.617 Running I/O for 2 seconds... 
00:17:37.148 00:17:37.148 Latency(us) 00:17:37.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.148 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:37.148 nvme0n1 : 2.00 6949.76 868.72 0.00 0.00 2296.92 2025.66 4706.68 00:17:37.148 =================================================================================================================== 00:17:37.148 Total : 6949.76 868.72 0.00 0.00 2296.92 2025.66 4706.68 00:17:37.148 0 00:17:37.148 19:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:37.148 19:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:37.148 19:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:37.148 19:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:37.148 | select(.opcode=="crc32c") 00:17:37.148 | "\(.module_name) \(.executed)"' 00:17:37.148 19:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80439 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80439 ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80439 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80439 00:17:37.148 killing process with pid 80439 00:17:37.148 Received shutdown signal, test time was about 2.000000 seconds 00:17:37.148 00:17:37.148 Latency(us) 00:17:37.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.148 =================================================================================================================== 00:17:37.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80439' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80439 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80439 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80226 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80226 ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80226 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80226 00:17:37.148 killing process with pid 80226 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80226' 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80226 00:17:37.148 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80226 00:17:37.408 00:17:37.408 real 0m18.409s 00:17:37.408 user 0m35.360s 00:17:37.408 sys 0m4.702s 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.408 ************************************ 00:17:37.408 END TEST nvmf_digest_clean 00:17:37.408 ************************************ 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:37.408 ************************************ 00:17:37.408 START TEST nvmf_digest_error 00:17:37.408 ************************************ 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80522 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80522 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80522 ']' 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.408 19:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.408 [2024-07-15 19:55:31.595582] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:37.408 [2024-07-15 19:55:31.595661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.667 [2024-07-15 19:55:31.726338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.667 [2024-07-15 19:55:31.805939] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.667 [2024-07-15 19:55:31.806272] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.667 [2024-07-15 19:55:31.806432] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.667 [2024-07-15 19:55:31.806450] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.667 [2024-07-15 19:55:31.806458] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.667 [2024-07-15 19:55:31.806484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.235 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.235 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:38.235 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.235 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:38.235 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 [2024-07-15 19:55:32.522979] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.493 19:55:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 [2024-07-15 19:55:32.585498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:38.493 null0 00:17:38.493 [2024-07-15 19:55:32.630893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.493 [2024-07-15 19:55:32.654994] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.493 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80554 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80554 /var/tmp/bperf.sock 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80554 ']' 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:38.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.494 19:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.494 [2024-07-15 19:55:32.708520] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:17:38.494 [2024-07-15 19:55:32.708766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80554 ] 00:17:38.752 [2024-07-15 19:55:32.840758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.752 [2024-07-15 19:55:32.928424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.752 [2024-07-15 19:55:32.981100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.686 19:55:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.960 nvme0n1 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:39.960 19:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.219 Running I/O for 2 seconds... 
00:17:40.219 [2024-07-15 19:55:34.304958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.305008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.305040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.320259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.320308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.320338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.335149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.335188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.335233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.350166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.350204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.350234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.365390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.365429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.365458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.380400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.380438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.380467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.396686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.396726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.396755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.413715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.413753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.413782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.429682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.429720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.429750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.444916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.444979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.445009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.219 [2024-07-15 19:55:34.460093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.219 [2024-07-15 19:55:34.460150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.219 [2024-07-15 19:55:34.460196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.478100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.478138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.478168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.493965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.494003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.494032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.509025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.509063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.509092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.524127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.524165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.524194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.539856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.539893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.539922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.554805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.554842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.569770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.569807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.569836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.584682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.584717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.584745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.599986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.600022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.600051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.616789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.616826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.616855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.634350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.634391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.634405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.651106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.651146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.651176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.667482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.667520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.667549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.683314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.683351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.683379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.699050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.699089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.699119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.478 [2024-07-15 19:55:34.714781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.478 [2024-07-15 19:55:34.714820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.478 [2024-07-15 19:55:34.714849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.730337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.730375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 
[2024-07-15 19:55:34.730405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.745976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.746012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 [2024-07-15 19:55:34.746041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.761490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.761527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 [2024-07-15 19:55:34.761556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.777036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.777075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 [2024-07-15 19:55:34.777105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.792704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.792739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 [2024-07-15 19:55:34.792768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.808454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.808492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.736 [2024-07-15 19:55:34.808521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.736 [2024-07-15 19:55:34.823516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.736 [2024-07-15 19:55:34.823551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.838666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.838701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13626 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.838730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.853654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.853690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.853719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.868556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.868593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.868621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.883442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.883479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.883507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.898696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.898748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.898777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.915166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.915205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.915235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.930326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.930391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.930404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.945931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.945961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.962886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.962921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.962949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.737 [2024-07-15 19:55:34.981066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.737 [2024-07-15 19:55:34.981108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.737 [2024-07-15 19:55:34.981122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:34.996544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:34.996581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:34.996610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.011653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.011690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.011719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.026779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.026816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.026844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.041830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.041868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.041897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.057476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.057527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.057557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.072756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.072793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.072822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.087756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.087793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.087821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.102722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.102759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.102788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.117722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.117760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.117790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.132560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.132597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.132625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.147600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.147637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.147666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.162454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 
00:17:40.995 [2024-07-15 19:55:35.162492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.162521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.177619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.177656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.177684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.192573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.192609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.995 [2024-07-15 19:55:35.192638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.995 [2024-07-15 19:55:35.207414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.995 [2024-07-15 19:55:35.207448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.996 [2024-07-15 19:55:35.207476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.996 [2024-07-15 19:55:35.222467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.996 [2024-07-15 19:55:35.222504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.996 [2024-07-15 19:55:35.222533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.996 [2024-07-15 19:55:35.237489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:40.996 [2024-07-15 19:55:35.237525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.996 [2024-07-15 19:55:35.237553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.254 [2024-07-15 19:55:35.252511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.254 [2024-07-15 19:55:35.252546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.254 [2024-07-15 19:55:35.252575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.254 [2024-07-15 19:55:35.267490] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.254 [2024-07-15 19:55:35.267527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.254 [2024-07-15 19:55:35.267556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.254 [2024-07-15 19:55:35.288693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.254 [2024-07-15 19:55:35.288730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.254 [2024-07-15 19:55:35.288760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.254 [2024-07-15 19:55:35.303725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.254 [2024-07-15 19:55:35.303762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.254 [2024-07-15 19:55:35.303791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.254 [2024-07-15 19:55:35.318953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.254 [2024-07-15 19:55:35.318991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.254 [2024-07-15 19:55:35.319020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.334420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.334491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.349436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.349474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.349503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.364211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.364247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.364275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.379125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.379162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.379191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.393804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.393870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.408673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.408709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.408737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.424671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.424762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.424792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.441151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.441192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.441206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.457839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.457877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.457906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.474587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.474656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.474686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.255 [2024-07-15 19:55:35.491920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.255 [2024-07-15 19:55:35.491965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.255 [2024-07-15 19:55:35.491978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.568 [2024-07-15 19:55:35.508665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.568 [2024-07-15 19:55:35.508703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.568 [2024-07-15 19:55:35.508732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.568 [2024-07-15 19:55:35.523979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.568 [2024-07-15 19:55:35.524018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.568 [2024-07-15 19:55:35.524047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.568 [2024-07-15 19:55:35.539382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.568 [2024-07-15 19:55:35.539417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.568 [2024-07-15 19:55:35.539446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.568 [2024-07-15 19:55:35.554076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.568 [2024-07-15 19:55:35.554113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.568 [2024-07-15 19:55:35.554141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.568 [2024-07-15 19:55:35.568736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.568 [2024-07-15 19:55:35.568771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.568 [2024-07-15 19:55:35.568800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.583425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.583459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.583487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.597948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.597985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.598013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.612455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.612490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.612518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.626897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.626933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.641583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.641618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.641646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.656195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.656232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.656261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.672224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.672291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.672339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.689012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.689052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 
[2024-07-15 19:55:35.689082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.706267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.706334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.723463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.723547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.739247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.739309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.739339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.754589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.754629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.754657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.769783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.769836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.769864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.569 [2024-07-15 19:55:35.784936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.569 [2024-07-15 19:55:35.784989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.569 [2024-07-15 19:55:35.785018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.799806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.799851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16536 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.799880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.816229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.816329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.816346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.833148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.833189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.833219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.849377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.849413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.849442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.865062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.865101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.865131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.880981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.881022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.881053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.896834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.896869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.896898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.914048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.914089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:4829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.914104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.930287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.930325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.930354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.946524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.946561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.946591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.962256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.962340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.962371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.979896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.979930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.979942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:35.996905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:35.996965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.842 [2024-07-15 19:55:35.996980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.842 [2024-07-15 19:55:36.012419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.842 [2024-07-15 19:55:36.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.843 [2024-07-15 19:55:36.012484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.843 [2024-07-15 19:55:36.027302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.843 [2024-07-15 19:55:36.027339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.843 [2024-07-15 19:55:36.027367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.843 [2024-07-15 19:55:36.042524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.843 [2024-07-15 19:55:36.042562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.843 [2024-07-15 19:55:36.042590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.843 [2024-07-15 19:55:36.057403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.843 [2024-07-15 19:55:36.057439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.843 [2024-07-15 19:55:36.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.843 [2024-07-15 19:55:36.072213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:41.843 [2024-07-15 19:55:36.072251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.843 [2024-07-15 19:55:36.072291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.087256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.087343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.087372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.102182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.102219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.102248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.117119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.117158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.117187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.132039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 
[2024-07-15 19:55:36.132075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.132104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.147406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.147441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.147470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.162384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.162420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.162449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.178225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.178291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.178321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.195620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.195688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.211195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.211260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.226516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.226552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.226580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.241580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.241616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.241660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.256501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.256538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 [2024-07-15 19:55:36.271494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x622fc0) 00:17:42.101 [2024-07-15 19:55:36.271530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.101 [2024-07-15 19:55:36.271557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.101 00:17:42.101 Latency(us) 00:17:42.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:42.101 nvme0n1 : 2.00 16290.16 63.63 0.00 0.00 7851.85 7030.23 28716.68 00:17:42.101 =================================================================================================================== 00:17:42.101 Total : 16290.16 63.63 0.00 0.00 7851.85 7030.23 28716.68 00:17:42.101 0 00:17:42.101 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:42.101 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:42.101 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:42.101 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:42.101 | .driver_specific 00:17:42.101 | .nvme_error 00:17:42.101 | .status_code 00:17:42.101 | .command_transient_transport_error' 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 )) 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80554 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80554 ']' 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80554 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80554 00:17:42.359 killing process with pid 80554 00:17:42.359 Received shutdown signal, test time was about 2.000000 seconds 00:17:42.359 00:17:42.359 Latency(us) 00:17:42.359 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.359 =================================================================================================================== 00:17:42.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80554' 00:17:42.359 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80554 00:17:42.360 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80554 00:17:42.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80609 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80609 /var/tmp/bperf.sock 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80609 ']' 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.618 19:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.618 [2024-07-15 19:55:36.822414] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:42.618 [2024-07-15 19:55:36.822688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80609 ] 00:17:42.618 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.618 Zero copy mechanism will not be used. 
00:17:42.877 [2024-07-15 19:55:36.952384] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.877 [2024-07-15 19:55:37.056223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.877 [2024-07-15 19:55:37.110572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:43.829 19:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.829 19:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:43.829 19:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.829 19:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.829 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.088 nvme0n1 00:17:44.347 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:44.347 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.347 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:44.348 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.348 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:44.348 19:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.348 Zero copy mechanism will not be used. 00:17:44.348 Running I/O for 2 seconds... 
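The trace above captures the setup for the second randread pass of the digest-error test (128 KiB I/O, queue depth 16): bdevperf is restarted on /var/tmp/bperf.sock, the bdev layer is told to keep per-error-code statistics and retry indefinitely, the controller is attached with TCP data digest (--ddgst) enabled, and crc32c corruption is injected into the target's accel layer so READ completions arrive at the host with bad data digests. A condensed sketch of that sequence, reconstructed from the commands visible in this trace (paths shortened to the spdk repo root; the error-injection RPC goes to the target's default socket, everything else to the bperf socket):

    # host side: bdevperf with its own RPC socket, 128 KiB random reads, qd 16, 2 s runtime
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # keep NVMe error-code counters and retry transient errors indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach to the target with TCP data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: corrupt crc32c results (interval 32, as in the trace), then run the workload
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # read back the transient-transport-error counter that digest.sh checks with (( count > 0 ))
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'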
00:17:44.348 [2024-07-15 19:55:38.455645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.455712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.455731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.460152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.460189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.460221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.464633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.464671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.464701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.468730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.468766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.468796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.472967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.473008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.473039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.477049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.477089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.477120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.481173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.481231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.481262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.485366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.485404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.485434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.489571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.489624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.489670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.494003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.494042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.494072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.498515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.498553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.498583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.502925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.502964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.502993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.507295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.507350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.507380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.511523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.511560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.511589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.515542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.515580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.515609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.519570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.519622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.519652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.523570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.523608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.523637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.527437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.527474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.527503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.348 [2024-07-15 19:55:38.531421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.348 [2024-07-15 19:55:38.531458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.348 [2024-07-15 19:55:38.531487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.535466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.535504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.535533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.539594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.539631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:44.349 [2024-07-15 19:55:38.539660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.543514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.543553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.543582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.547467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.547504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.547533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.551307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.551345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.551374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.555140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.555178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.555207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.559169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.559207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.559236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.563084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.563122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.563152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.567069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.567107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.567136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.571064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.571104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.571133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.575047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.575085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.575114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.579031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.579069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.579098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.583127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.583165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.583195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.587113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.587164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.587193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.349 [2024-07-15 19:55:38.591226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.349 [2024-07-15 19:55:38.591293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.349 [2024-07-15 19:55:38.591323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.595353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.609 [2024-07-15 19:55:38.595390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.609 [2024-07-15 19:55:38.595419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.599250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.609 [2024-07-15 19:55:38.599317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.609 [2024-07-15 19:55:38.599347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.603241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.609 [2024-07-15 19:55:38.603325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.609 [2024-07-15 19:55:38.603355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.607279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.609 [2024-07-15 19:55:38.607315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.609 [2024-07-15 19:55:38.607344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.611172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.609 [2024-07-15 19:55:38.611211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.609 [2024-07-15 19:55:38.611240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.609 [2024-07-15 19:55:38.615122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.615161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.615190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.619205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.619243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.619272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.623210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:44.610 [2024-07-15 19:55:38.623248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.623290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.627199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.627237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.627266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.631241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.631308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.631338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.635289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.635327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.635356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.639299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.639336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.639365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.643215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.643255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.643314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.647223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.647306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.647337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.651186] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.651223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.651252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.655173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.655210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.655239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.659167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.659206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.659234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.663042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.663079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.663108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.667205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.667242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.667272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.671291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.671328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.671357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.675237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.675319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.679232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.679296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.679326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.683250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.683298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.683327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.687240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.687305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.687335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.691326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.691363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.691392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.695240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.695306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.695337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.699248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.699315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.699344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.610 [2024-07-15 19:55:38.703256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.610 [2024-07-15 19:55:38.703318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.610 [2024-07-15 19:55:38.703332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.707237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.707303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.707333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.711291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.711328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.711357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.715196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.715233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.715263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.719158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.719196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.719225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.723171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.723208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.723238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.727181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.727218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.727247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.731268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.731316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.731346] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.735219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.735257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.735319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.739278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.739342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.739372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.743322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.743360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.743389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.747394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.747430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.747460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.751450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.751485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.751515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.755491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.755528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.755557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.759457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.759494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.759523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.763415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.763451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.763481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.767248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.767316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.767346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.771311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.771348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.771376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.775291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.775357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.779209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.779246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.779291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.783204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.783242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.783271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.787243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.787340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.792033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.792074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.792104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.797104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.797147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.797161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.801731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.801770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.801799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.806193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.806232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.806261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.811233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.611 [2024-07-15 19:55:38.811324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.611 [2024-07-15 19:55:38.811340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.611 [2024-07-15 19:55:38.815810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.815864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.815894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.820443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.820513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.820528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.825369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.825409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.825439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.829900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.829956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.829985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.834439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.834478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.834508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.839017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.839058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.839088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.843620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.843693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.843708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.848320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.612 [2024-07-15 19:55:38.848392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.848408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.612 [2024-07-15 19:55:38.852788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:44.612 [2024-07-15 19:55:38.852829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.612 [2024-07-15 19:55:38.852844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.872 [2024-07-15 19:55:38.857369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.872 [2024-07-15 19:55:38.857411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.872 [2024-07-15 19:55:38.857426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.861913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.861981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.866529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.866571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.866586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.870991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.871034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.871049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.875439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.875478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.875526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.879780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.879818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.879847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.884032] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.884070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.884099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.888210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.888246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.888275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.892639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.892695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.892708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.897117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.897159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.897173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.901268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.901352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.901382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.905369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.905406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.905435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.909391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.909429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.909457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.913534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.913572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.913601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.917668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.917720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.917749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.921742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.921779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.921808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.925743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.925780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.925808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.929947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.929985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.930014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.934063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.934102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.934131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.938163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.938203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.938232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.942198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.942237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.942266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.946302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.946339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.946367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.950231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.950296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.950326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.954278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.954314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.954343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.958222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.958289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.958319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.962216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.962253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.962312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.966184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.966222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.873 [2024-07-15 19:55:38.966251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.873 [2024-07-15 19:55:38.970277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.873 [2024-07-15 19:55:38.970340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.970370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.974279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.974315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.974343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.978575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.978612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.978626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.983478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.983532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.987783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.987838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.987867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.992191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.992234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:38.992263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:38.996666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:38.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:44.874 [2024-07-15 19:55:38.996749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.001130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.001171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.001201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.005442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.005525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.009743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.009781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.009810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.013934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.013972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.018165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.018203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.018233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.022553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.022591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.022606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.026895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.026934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.026964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.031462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.031501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.031515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.036214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.036252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.036314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.040746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.040783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.040812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.045252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.045306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.045322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.049744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.049783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.049813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.054181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.054222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.054253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.059100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.059157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.059171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.063692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.063733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.063762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.068017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.068055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.068084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.072200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.072238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.076346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.076411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.080583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.080621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.080650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.084779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.084817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.084846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.088983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:44.874 [2024-07-15 19:55:39.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.089051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.093085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.093126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.874 [2024-07-15 19:55:39.093156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.874 [2024-07-15 19:55:39.097353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.874 [2024-07-15 19:55:39.097389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.875 [2024-07-15 19:55:39.097418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.875 [2024-07-15 19:55:39.101374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.875 [2024-07-15 19:55:39.101410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.875 [2024-07-15 19:55:39.101440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.875 [2024-07-15 19:55:39.105452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.875 [2024-07-15 19:55:39.105489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.875 [2024-07-15 19:55:39.105518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.875 [2024-07-15 19:55:39.109538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.875 [2024-07-15 19:55:39.109576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.875 [2024-07-15 19:55:39.109605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.875 [2024-07-15 19:55:39.113581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:44.875 [2024-07-15 19:55:39.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.875 [2024-07-15 19:55:39.113646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.135 [2024-07-15 19:55:39.117629] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.135 [2024-07-15 19:55:39.117666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.135 [2024-07-15 19:55:39.117695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.135 [2024-07-15 19:55:39.121857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.135 [2024-07-15 19:55:39.121894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.135 [2024-07-15 19:55:39.121924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.135 [2024-07-15 19:55:39.126099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.135 [2024-07-15 19:55:39.126137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.135 [2024-07-15 19:55:39.126167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.135 [2024-07-15 19:55:39.130718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.135 [2024-07-15 19:55:39.130759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.135 [2024-07-15 19:55:39.130789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.135 [2024-07-15 19:55:39.135316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.135 [2024-07-15 19:55:39.135353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.135 [2024-07-15 19:55:39.135382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.139521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.139558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.139588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.143725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.143763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.143791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.147797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.147834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.147848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.151879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.151917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.151945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.155891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.155929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.155957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.160023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.160061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.160090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.164168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.164208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.164237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.168505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.168543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.172802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.172853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.172883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.177297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.177412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.181610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.181650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.181679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.185858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.185896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.185925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.190086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.190124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.190153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.194160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.194199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.194227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.198327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.198363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.198391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.202405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.202441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.202470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.206470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.206524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.210487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.210525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.210554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.214505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.214543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.214571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.218469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.218506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.218535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.222469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.222506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.222535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.226441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.226478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.226507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.230540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.230578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.136 [2024-07-15 19:55:39.230623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.234604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.234641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.234670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.238619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.238656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.238685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.242659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.242708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.242720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.247098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.247153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.247184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.251591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.136 [2024-07-15 19:55:39.251633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.136 [2024-07-15 19:55:39.251647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.136 [2024-07-15 19:55:39.256086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.256126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.256157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.260350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.260388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.260417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.264464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.264503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.264532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.268413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.268449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.268478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.272255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.272332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.272347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.276157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.276194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.276223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.280104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.280142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.280170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.284023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.284060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.284089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.288100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.288138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.288167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.292096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.292132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.292160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.296207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.296244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.296273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.300252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.300315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.304233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.304313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.304327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.308253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.308314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.308343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.312204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.312242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.312271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.316233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:45.137 [2024-07-15 19:55:39.316313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.316328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.320236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.320314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.320329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.324227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.324310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.324324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.328242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.328309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.328339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.332339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.332377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.332407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.336253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.336317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.336346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.340312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.340349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.340378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.344410] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.344447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.344477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.348550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.137 [2024-07-15 19:55:39.348588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.137 [2024-07-15 19:55:39.348618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.137 [2024-07-15 19:55:39.352731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.352767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.352796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.356798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.356834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.356864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.360798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.360833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.360862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.365279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.365348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.365363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.369637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.369674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.369703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.373939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.373978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.374008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.138 [2024-07-15 19:55:39.378560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.138 [2024-07-15 19:55:39.378598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.138 [2024-07-15 19:55:39.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.382879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.382919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.382933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.387132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.387170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.391541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.391580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.391594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.395806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.395843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.395873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.400126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.400165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.400194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.404764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.404800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.404830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.409103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.409144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.409158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.413312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.413351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.413381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.417510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.417548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.417578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.421801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.421841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.421854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.425974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.398 [2024-07-15 19:55:39.426012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.398 [2024-07-15 19:55:39.426042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.398 [2024-07-15 19:55:39.430118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.430156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.430185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.434562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.434602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.434631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.438660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.438713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.438743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.442847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.442886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.442915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.447119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.447156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.447170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.451339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.451375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.451404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.455464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.455502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.455531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.459639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.459676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.399 [2024-07-15 19:55:39.459706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.464031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.464068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.464081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.468355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.468391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.468404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.472446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.472482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.472511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.476842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.476880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.476894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.481031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.481072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.481085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.485304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.485354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.485383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.489393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.489430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.489459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.493789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.493828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.493857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.498020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.498059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.498089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.502505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.502545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.502575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.507119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.507155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.507186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.511602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.511642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.511688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.516058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.516096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.516126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.520712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.520749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.520764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.525406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.525447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.525462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.529819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.529861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.529875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.534325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.534414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.534430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.539007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.539045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.539058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.543214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.543251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.543309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.547968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.548005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.399 [2024-07-15 19:55:39.548033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.399 [2024-07-15 19:55:39.552417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ebfe0) 00:17:45.399 [2024-07-15 19:55:39.552453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.552481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.557001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.557043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.557073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.561552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.561592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.561623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.565935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.565972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.566001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.570250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.570330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.570344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.574274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.574334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.574348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.578541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.578578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.578607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.582684] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.582722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.582751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.586767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.586805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.586833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.590790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.590827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.590856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.594873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.594911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.594940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.598964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.599002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.599030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.603019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.603056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.603086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.607137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.607174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.607204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.611208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.611245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.611274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.615229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.615289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.615319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.619244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.619311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.619340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.623319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.623355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.623384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.627374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.627411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.627440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.631727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.631766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.631779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.636704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.636758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.636772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.400 [2024-07-15 19:55:39.641579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.400 [2024-07-15 19:55:39.641621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.400 [2024-07-15 19:55:39.641635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.646024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.646062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.646091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.650542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.650581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.650595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.654985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.655022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.659015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.659082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.663097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.663134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.663162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.667612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.667654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.667685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.671827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.671864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.671893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.675751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.675789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.675818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.679853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.679890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.679919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.683775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.683813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.687838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.687903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.691739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.691776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.691805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.695919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.695960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.660 [2024-07-15 19:55:39.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.700318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.700354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.700383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.704247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.704311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.660 [2024-07-15 19:55:39.704340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.660 [2024-07-15 19:55:39.708296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.660 [2024-07-15 19:55:39.708332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.708361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.712364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.712399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.712429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.716285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.716320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.716348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.720742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.720779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.720810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.724880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.724916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.724971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.729008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.729048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.729062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.733097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.733137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.733151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.737360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.737398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.737429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.741704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.741741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.745837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.745874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.745904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.749917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.749954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.749984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.753973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.754010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.754039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.758298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.758383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.758399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.762618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.762657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.762670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.766679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.766716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.766745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.770708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.770775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.774694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.774732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.778970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.779008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.783194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:45.661 [2024-07-15 19:55:39.783234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.783264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.787210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.787247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.787287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.791203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.791242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.791272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.795223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.795289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.795304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.799544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.799613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.803680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.803716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.803746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.807655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.807692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.807721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.811740] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.811776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.811805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.815725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.815762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.815792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.820049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.820088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.824206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.824244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.661 [2024-07-15 19:55:39.824275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.661 [2024-07-15 19:55:39.828233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.661 [2024-07-15 19:55:39.828293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.828307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.832133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.832169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.832199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.836240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.836306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.836336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.840686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.840727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.840758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.844885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.844921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.844986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.848970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.849008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.849021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.853085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.853124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.853137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.857121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.857159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.857189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.861616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.861656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.861670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.865810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.865879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.869998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.870035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.870065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.874114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.874151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.878527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.878566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.878597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.883249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.883349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.883364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.887810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.887849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.887880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.892144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.892184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.892214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.896525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.896563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.896594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.662 [2024-07-15 19:55:39.900858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.662 [2024-07-15 19:55:39.900895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.662 [2024-07-15 19:55:39.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.905529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.905571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.905586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.910083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.910122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.910152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.914418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.914456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.914486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.918788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.918854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.923186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.923227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.923258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.927748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.927789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.927820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.932108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.932146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.932175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.936319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.936356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.936385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.940410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.940445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.940475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.944451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.944486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.944515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.948799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.948868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.948915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.953066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.953107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.953138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.957182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.957225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.957240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.961384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.961420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.961450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.965453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.965490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.965521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.969904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.969943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.969972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.973976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.974015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.974045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.978058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.978096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.978126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.982077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.982114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.982144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.986376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.986431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.986460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.990793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.990832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.990862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.994877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.994915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.994945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:39.999040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:39.999077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:39.999107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:40.003131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:40.003169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:40.003198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:40.007654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:40.007691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.922 [2024-07-15 19:55:40.007721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.922 [2024-07-15 19:55:40.011883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.922 [2024-07-15 19:55:40.011920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.011949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.015979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:45.923 [2024-07-15 19:55:40.016015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.016044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.020110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.020147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.020176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.024246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.024306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.024320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.028489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.028528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.028558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.032752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.032788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.032819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.036785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.036822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.036852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.041042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.041082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.041095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.045511] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.045551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.045581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.050059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.050098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.050128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.054629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.054699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.054727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.059187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.059226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.059256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.063930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.063968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.063981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.068317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.068366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.068381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.072916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.072980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.073011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.077271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.077352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.081440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.081483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.081513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.085607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.085643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.085673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.089536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.089573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.089602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.094114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.094154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.094183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.098327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.098363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.098393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.102349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.102382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.102411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.106326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.106378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.106408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.110346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.110382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.110412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.114739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.114778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.114808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.118902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.118939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.122984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.123022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.123051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.127124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.127161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.131415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.131453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.131483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.923 [2024-07-15 19:55:40.135624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.923 [2024-07-15 19:55:40.135661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.923 [2024-07-15 19:55:40.135691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.139784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.139822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.139852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.143869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.143906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.143936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.147905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.147940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.147970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.152256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.152323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.152338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.156499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.156537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.156551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.160661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.924 [2024-07-15 19:55:40.160713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.924 [2024-07-15 19:55:40.164804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:45.924 [2024-07-15 19:55:40.164841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.924 [2024-07-15 19:55:40.164871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.169381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.169417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.169447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.173602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.173657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.173687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.177804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.177842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.177871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.181895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.181933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.181962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.186153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.186193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.186223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.190503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.190542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.190571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.194521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.194557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.194587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.198490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.198527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.198556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.202565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.202603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.206808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.206846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.206876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.211030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.211068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.211098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.215105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.215143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.215173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.219261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.219326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.219357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.223611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.223650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.223680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.227828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.227867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.227896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.231905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.231942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.231971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.235914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.235949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.235979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.240236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.184 [2024-07-15 19:55:40.240316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.184 [2024-07-15 19:55:40.240347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.184 [2024-07-15 19:55:40.244559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.244594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.244624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.248568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 
00:17:46.185 [2024-07-15 19:55:40.248603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.248633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.252511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.252546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.252576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.256500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.256536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.256565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.260928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.261008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.261023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.265086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.265125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.265155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.269141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.269181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.269211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.273479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.273519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.273550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.277917] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.277955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.277984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.282011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.282058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.282086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.286267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.286348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.286363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.290448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.290486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.290499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.294968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.295005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.295034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.299188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.299227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.299257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.303314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.303352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.303382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.307541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.307598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.307628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.311828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.311867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.311897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.315940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.315978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.316007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.320111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.320149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.320178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.324519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.324557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.324571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.328898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.328958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.328989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.333100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.333141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.333154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.337170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.337210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.337240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.341646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.341684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.341713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.345778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.345815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.345846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.349927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.349964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.349993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.353961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.353999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.354028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.185 [2024-07-15 19:55:40.358362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.185 [2024-07-15 19:55:40.358400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.185 [2024-07-15 19:55:40.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.362602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.362640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.362670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.366844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.366882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.366910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.370813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.370851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.370879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.374759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.374797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.374841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.379212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.379252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.379298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.383379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.383416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.383445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.387429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.387465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.387495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.391371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.391409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.391439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.395649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.395688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.395718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.399941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.399979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.400009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.404088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.404125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.404154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.408224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.408290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.408321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.412616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.412668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.412699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.416830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.416866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.416895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.420924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.420987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.421001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.186 [2024-07-15 19:55:40.425025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.186 [2024-07-15 19:55:40.425066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.186 [2024-07-15 19:55:40.425080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.429480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.429519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.445 [2024-07-15 19:55:40.429548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.433780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.433818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.445 [2024-07-15 19:55:40.433847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.437827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.445 [2024-07-15 19:55:40.437894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.441895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.441933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.445 [2024-07-15 19:55:40.441962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.446487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.446527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.445 [2024-07-15 19:55:40.446557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.445 [2024-07-15 19:55:40.450548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ebfe0) 00:17:46.445 [2024-07-15 19:55:40.450587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:46.445 [2024-07-15 19:55:40.450617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:46.445
00:17:46.445 Latency(us)
00:17:46.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.445 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:46.445 nvme0n1 : 2.00 7358.55 919.82 0.00 0.00 2170.84 1750.11 5034.36
00:17:46.445 ===================================================================================================================
00:17:46.445 Total : 7358.55 919.82 0.00 0.00 2170.84 1750.11 5034.36
00:17:46.445 0
00:17:46.445 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:46.445 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:46.445 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:46.445 | .driver_specific
00:17:46.445 | .nvme_error
00:17:46.445 | .status_code
00:17:46.445 | .command_transient_transport_error'
00:17:46.445 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 ))
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80609
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80609 ']'
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80609
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80609
00:17:46.703 killing process with pid 80609 Received shutdown signal, test time was about 2.000000 seconds
00:17:46.703
00:17:46.703 Latency(us)
00:17:46.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.703 ===================================================================================================================
00:17:46.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80609'
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80609
00:17:46.703 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80609
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
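The pass/fail decision traced above comes from the per-bdev NVMe error counters that the bperf app exposes once bdev_nvme_set_options --nvme-error-stat is in effect: get_transient_errcount runs bdev_get_iostat over the bperf RPC socket and picks out the transient-transport-error counter with jq (475 such errors were counted on this pass). A minimal stand-alone sketch of that query, using only the socket path, bdev name and jq filter visible in the trace (the shell wrapper and the final echo are illustrative, not part of the harness):

  #!/usr/bin/env bash
  # Query the bdevperf RPC server for nvme0n1 I/O statistics and extract the
  # count of commands that completed with TRANSIENT TRANSPORT ERROR (00/22).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes when at least one injected digest error
  # was surfaced to the host as a transient transport error.
  (( errcount > 0 )) && echo "transient transport errors seen: $errcount"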
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80669
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80669 /var/tmp/bperf.sock
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80669 ']'
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:46.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:46.961 19:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:46.961 [2024-07-15 19:55:41.054745] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... [2024-07-15 19:55:41.055064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80669 ]
00:17:46.961 [2024-07-15 19:55:41.205143] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:47.220 [2024-07-15 19:55:41.303239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:47.220 [2024-07-15 19:55:41.361833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:47.784 19:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:47.784 19:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:47.784 19:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:47.784 19:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.042 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.300 nvme0n1 00:17:48.300 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:48.300 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.300 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.558 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.558 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:48.558 19:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.558 Running I/O for 2 seconds... 00:17:48.558 [2024-07-15 19:55:42.662402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fef90 00:17:48.558 [2024-07-15 19:55:42.664735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.664776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.677391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190feb58 00:17:48.558 [2024-07-15 19:55:42.679759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.679796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.692038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fe2e8 00:17:48.558 [2024-07-15 19:55:42.694384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.694420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.706191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fda78 00:17:48.558 [2024-07-15 19:55:42.708725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.708760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.720425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fd208 00:17:48.558 [2024-07-15 19:55:42.722981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 
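The WRITE commands above and below are failing by design: this randwrite pass reuses the setup just traced, where the accel layer is told to corrupt CRC32C results while the controller is attached with data digest enabled. For orientation, here is that traced sequence condensed into one runnable sketch (every path, flag and argument is copied from the trace; the background start and the socket-wait loop merely stand in for the harness's waitforlisten helper):

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # Start bdevperf idle (-z) with the workload used here: 2 s of 4 KiB randwrite at queue depth 128.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  until [ -S "$sock" ]; do sleep 0.1; done
  # Keep NVMe error statistics and retry failed I/O indefinitely so injected errors are counted, not fatal.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP target with data digest (--ddgst) so a corrupted CRC32C surfaces as a digest error.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject corruption into crc32c operations (the -o/-t/-i arguments are taken as-is from the trace).
  "$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the timed run.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests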
[2024-07-15 19:55:42.723016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.734681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fc998 00:17:48.558 [2024-07-15 19:55:42.736894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.736926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.749010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fc128 00:17:48.558 [2024-07-15 19:55:42.751279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.751338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.763861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fb8b8 00:17:48.558 [2024-07-15 19:55:42.766131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.766165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.777900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fb048 00:17:48.558 [2024-07-15 19:55:42.780074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.780107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:48.558 [2024-07-15 19:55:42.791746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fa7d8 00:17:48.558 [2024-07-15 19:55:42.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.558 [2024-07-15 19:55:42.793971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.806684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f9f68 00:17:48.816 [2024-07-15 19:55:42.809156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.809195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.822263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f96f8 00:17:48.816 [2024-07-15 19:55:42.824535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21146 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:48.816 [2024-07-15 19:55:42.824570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.838042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f8e88 00:17:48.816 [2024-07-15 19:55:42.840485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.840521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.853768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f8618 00:17:48.816 [2024-07-15 19:55:42.856004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.856039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.868510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f7da8 00:17:48.816 [2024-07-15 19:55:42.870815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.870849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.883357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f7538 00:17:48.816 [2024-07-15 19:55:42.885514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.885550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.897779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f6cc8 00:17:48.816 [2024-07-15 19:55:42.899768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.899799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.911459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f6458 00:17:48.816 [2024-07-15 19:55:42.913497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.925548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f5be8 00:17:48.816 [2024-07-15 19:55:42.927478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25157 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.927527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:48.816 [2024-07-15 19:55:42.939184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f5378 00:17:48.816 [2024-07-15 19:55:42.941290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.816 [2024-07-15 19:55:42.941332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:42.953862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f4b08 00:17:48.817 [2024-07-15 19:55:42.956071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:42.956106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:42.969819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f4298 00:17:48.817 [2024-07-15 19:55:42.971870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:42.971904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:42.984357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f3a28 00:17:48.817 [2024-07-15 19:55:42.986244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:42.986307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:42.998607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f31b8 00:17:48.817 [2024-07-15 19:55:43.000518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:43.000551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:43.012820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f2948 00:17:48.817 [2024-07-15 19:55:43.014753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:43.014787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:43.027210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f20d8 00:17:48.817 [2024-07-15 19:55:43.029176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:14188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:43.029212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:43.041792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f1868 00:17:48.817 [2024-07-15 19:55:43.043917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:43.043950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:48.817 [2024-07-15 19:55:43.057176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f0ff8 00:17:48.817 [2024-07-15 19:55:43.059279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.817 [2024-07-15 19:55:43.059322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.072542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f0788 00:17:49.075 [2024-07-15 19:55:43.074418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.074452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.087407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eff18 00:17:49.075 [2024-07-15 19:55:43.089240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.089338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.102134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ef6a8 00:17:49.075 [2024-07-15 19:55:43.104068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.104104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.118193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eee38 00:17:49.075 [2024-07-15 19:55:43.120082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.120114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.133889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ee5c8 00:17:49.075 [2024-07-15 19:55:43.135708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:20782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.135755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.148284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190edd58 00:17:49.075 [2024-07-15 19:55:43.150041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.150076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.162703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ed4e8 00:17:49.075 [2024-07-15 19:55:43.164360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.164391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.176354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ecc78 00:17:49.075 [2024-07-15 19:55:43.178076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.178110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.190242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ec408 00:17:49.075 [2024-07-15 19:55:43.191928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.191960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.203955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ebb98 00:17:49.075 [2024-07-15 19:55:43.205648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.205697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.217723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eb328 00:17:49.075 [2024-07-15 19:55:43.219322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.231436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eaab8 00:17:49.075 [2024-07-15 19:55:43.233041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.233079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.245002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ea248 00:17:49.075 [2024-07-15 19:55:43.246627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.075 [2024-07-15 19:55:43.246661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:49.075 [2024-07-15 19:55:43.258893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e99d8 00:17:49.076 [2024-07-15 19:55:43.260420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.076 [2024-07-15 19:55:43.260452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:49.076 [2024-07-15 19:55:43.272553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e9168 00:17:49.076 [2024-07-15 19:55:43.274103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.076 [2024-07-15 19:55:43.274136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:49.076 [2024-07-15 19:55:43.286297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e88f8 00:17:49.076 [2024-07-15 19:55:43.287759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.076 [2024-07-15 19:55:43.287791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:49.076 [2024-07-15 19:55:43.299911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e8088 00:17:49.076 [2024-07-15 19:55:43.301452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.076 [2024-07-15 19:55:43.301485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:49.076 [2024-07-15 19:55:43.313627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e7818 00:17:49.076 [2024-07-15 19:55:43.315103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.076 [2024-07-15 19:55:43.315135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.327365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e6fa8 00:17:49.334 [2024-07-15 
19:55:43.328862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.328894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.341198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e6738 00:17:49.334 [2024-07-15 19:55:43.342775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.342807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.355004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e5ec8 00:17:49.334 [2024-07-15 19:55:43.356500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.356533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.368791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e5658 00:17:49.334 [2024-07-15 19:55:43.370301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.370361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.382602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e4de8 00:17:49.334 [2024-07-15 19:55:43.383992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.396159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e4578 00:17:49.334 [2024-07-15 19:55:43.397715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.397749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.410408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e3d08 00:17:49.334 [2024-07-15 19:55:43.411825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.411857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.425663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e3498 
00:17:49.334 [2024-07-15 19:55:43.427188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.427224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.441057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e2c28 00:17:49.334 [2024-07-15 19:55:43.442712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.442764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.457421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e23b8 00:17:49.334 [2024-07-15 19:55:43.459016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.459051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.472917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e1b48 00:17:49.334 [2024-07-15 19:55:43.474379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.474428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.487995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e12d8 00:17:49.334 [2024-07-15 19:55:43.489485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.489521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.503033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e0a68 00:17:49.334 [2024-07-15 19:55:43.504463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.504497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.518051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e01f8 00:17:49.334 [2024-07-15 19:55:43.519447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.519482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.532552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f07d0) with pdu=0x2000190df988 00:17:49.334 [2024-07-15 19:55:43.533964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.533997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.547808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190df118 00:17:49.334 [2024-07-15 19:55:43.549424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.549461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.562545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190de8a8 00:17:49.334 [2024-07-15 19:55:43.563819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.563852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:49.334 [2024-07-15 19:55:43.576673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190de038 00:17:49.334 [2024-07-15 19:55:43.577965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.334 [2024-07-15 19:55:43.578001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.597256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190de038 00:17:49.592 [2024-07-15 19:55:43.599671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.599705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.612396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190de8a8 00:17:49.592 [2024-07-15 19:55:43.614877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.614910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.627388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190df118 00:17:49.592 [2024-07-15 19:55:43.629785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.629817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.641607] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190df988 00:17:49.592 [2024-07-15 19:55:43.643865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.643898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.655571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e01f8 00:17:49.592 [2024-07-15 19:55:43.657941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.657975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.669563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e0a68 00:17:49.592 [2024-07-15 19:55:43.671789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.671821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:49.592 [2024-07-15 19:55:43.684163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e12d8 00:17:49.592 [2024-07-15 19:55:43.686521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.592 [2024-07-15 19:55:43.686555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.698342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e1b48 00:17:49.593 [2024-07-15 19:55:43.700512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.700546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.712459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e23b8 00:17:49.593 [2024-07-15 19:55:43.714695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.714729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.726390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e2c28 00:17:49.593 [2024-07-15 19:55:43.728503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.728536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.740310] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e3498 00:17:49.593 [2024-07-15 19:55:43.742468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.742501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.754598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e3d08 00:17:49.593 [2024-07-15 19:55:43.756723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.756756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.768869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e4578 00:17:49.593 [2024-07-15 19:55:43.770993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.771026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.783143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e4de8 00:17:49.593 [2024-07-15 19:55:43.785291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.785325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.797117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e5658 00:17:49.593 [2024-07-15 19:55:43.799256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.799295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.811329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e5ec8 00:17:49.593 [2024-07-15 19:55:43.813450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.813484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:49.593 [2024-07-15 19:55:43.825476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e6738 00:17:49.593 [2024-07-15 19:55:43.827488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.593 [2024-07-15 19:55:43.827519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:49.872 
[2024-07-15 19:55:43.839327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e6fa8 00:17:49.872 [2024-07-15 19:55:43.841454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.841489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.853389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e7818 00:17:49.872 [2024-07-15 19:55:43.855351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.855386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.867250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e8088 00:17:49.872 [2024-07-15 19:55:43.869367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.869401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.881152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e88f8 00:17:49.872 [2024-07-15 19:55:43.883194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.883226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.895153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e9168 00:17:49.872 [2024-07-15 19:55:43.897264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.897340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.909562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190e99d8 00:17:49.872 [2024-07-15 19:55:43.911437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.911471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.923797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ea248 00:17:49.872 [2024-07-15 19:55:43.925865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.925898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:17:49.872 [2024-07-15 19:55:43.937917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eaab8 00:17:49.872 [2024-07-15 19:55:43.939823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.872 [2024-07-15 19:55:43.939856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:49.872 [2024-07-15 19:55:43.951959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eb328 00:17:49.873 [2024-07-15 19:55:43.954005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:43.954039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:43.966319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ebb98 00:17:49.873 [2024-07-15 19:55:43.968123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:43.968155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:43.980723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ec408 00:17:49.873 [2024-07-15 19:55:43.982638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:43.982671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:43.994931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ecc78 00:17:49.873 [2024-07-15 19:55:43.996780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:43.996813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.009103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ed4e8 00:17:49.873 [2024-07-15 19:55:44.011110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.011333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.023592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190edd58 00:17:49.873 [2024-07-15 19:55:44.025540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.025574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.037726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ee5c8 00:17:49.873 [2024-07-15 19:55:44.039498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.039531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.051826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eee38 00:17:49.873 [2024-07-15 19:55:44.053642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.053677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.065974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190ef6a8 00:17:49.873 [2024-07-15 19:55:44.067735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.067768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.079908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190eff18 00:17:49.873 [2024-07-15 19:55:44.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.081725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:49.873 [2024-07-15 19:55:44.094790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f0788 00:17:49.873 [2024-07-15 19:55:44.096750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.873 [2024-07-15 19:55:44.096788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.109903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f0ff8 00:17:50.147 [2024-07-15 19:55:44.111709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.111758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.125915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f1868 00:17:50.147 [2024-07-15 19:55:44.127782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.142436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f20d8 00:17:50.147 [2024-07-15 19:55:44.144245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.144321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.158807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f2948 00:17:50.147 [2024-07-15 19:55:44.160528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.160564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.174914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f31b8 00:17:50.147 [2024-07-15 19:55:44.176653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.176716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.189755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f3a28 00:17:50.147 [2024-07-15 19:55:44.191344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.191378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.204364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f4298 00:17:50.147 [2024-07-15 19:55:44.205948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.205982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.218952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f4b08 00:17:50.147 [2024-07-15 19:55:44.220479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.220512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.235098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f5378 00:17:50.147 [2024-07-15 19:55:44.236866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.236916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.252075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f5be8 00:17:50.147 [2024-07-15 19:55:44.253750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.253788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.269042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f6458 00:17:50.147 [2024-07-15 19:55:44.270746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.270795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.286100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f6cc8 00:17:50.147 [2024-07-15 19:55:44.287858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.287891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.303137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f7538 00:17:50.147 [2024-07-15 19:55:44.304866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.304899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.320070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f7da8 00:17:50.147 [2024-07-15 19:55:44.321667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.321705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.336383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f8618 00:17:50.147 [2024-07-15 19:55:44.337911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.337962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.352438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f8e88 00:17:50.147 [2024-07-15 19:55:44.353951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.353988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.369085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f96f8 00:17:50.147 [2024-07-15 19:55:44.370639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.370677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:50.147 [2024-07-15 19:55:44.386184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190f9f68 00:17:50.147 [2024-07-15 19:55:44.387702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.147 [2024-07-15 19:55:44.387739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:50.405 [2024-07-15 19:55:44.403514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fa7d8 00:17:50.405 [2024-07-15 19:55:44.405071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.405 [2024-07-15 19:55:44.405111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:50.405 [2024-07-15 19:55:44.420866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fb048 00:17:50.405 [2024-07-15 19:55:44.422480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.405 [2024-07-15 19:55:44.422516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:50.405 [2024-07-15 19:55:44.438292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fb8b8 00:17:50.405 [2024-07-15 19:55:44.439821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.405 [2024-07-15 19:55:44.439860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:50.405 [2024-07-15 19:55:44.455716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fc128 00:17:50.405 [2024-07-15 19:55:44.457270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.405 [2024-07-15 19:55:44.457323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:50.405 [2024-07-15 19:55:44.473322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fc998 00:17:50.405 [2024-07-15 19:55:44.474818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.474854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.490386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fd208 00:17:50.406 [2024-07-15 19:55:44.491833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.491869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.507775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fda78 00:17:50.406 [2024-07-15 19:55:44.509173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.509212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.525273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fe2e8 00:17:50.406 [2024-07-15 19:55:44.526721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.526758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.542389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190feb58 00:17:50.406 [2024-07-15 19:55:44.543734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.543787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.567231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fef90 00:17:50.406 [2024-07-15 19:55:44.570016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.570052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.584135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190feb58 00:17:50.406 [2024-07-15 19:55:44.586896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.586934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.601177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fe2e8 00:17:50.406 [2024-07-15 19:55:44.603732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 
19:55:44.603766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.617781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fda78 00:17:50.406 [2024-07-15 19:55:44.620251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.620316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.406 [2024-07-15 19:55:44.634451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f07d0) with pdu=0x2000190fd208 00:17:50.406 [2024-07-15 19:55:44.637076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.406 [2024-07-15 19:55:44.637115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.406 00:17:50.406 Latency(us) 00:17:50.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.406 nvme0n1 : 2.00 16924.13 66.11 0.00 0.00 7555.94 2383.13 32648.84 00:17:50.406 =================================================================================================================== 00:17:50.406 Total : 16924.13 66.11 0.00 0.00 7555.94 2383.13 32648.84 00:17:50.406 0 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:50.664 | .driver_specific 00:17:50.664 | .nvme_error 00:17:50.664 | .status_code 00:17:50.664 | .command_transient_transport_error' 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 )) 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80669 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80669 ']' 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80669 00:17:50.664 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80669 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:50.922 killing process with pid 80669 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 80669' 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80669 00:17:50.922 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.922 00:17:50.922 Latency(us) 00:17:50.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.922 =================================================================================================================== 00:17:50.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.922 19:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80669 00:17:50.922 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:50.922 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:50.922 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:50.922 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:50.922 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80730 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80730 /var/tmp/bperf.sock 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80730 ']' 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:51.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.180 19:55:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:51.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:51.180 Zero copy mechanism will not be used. 00:17:51.180 [2024-07-15 19:55:45.224027] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
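The trace above shows how host/digest.sh evaluates the sub-test that just finished: it queries bdevperf's RPC socket with bdev_get_iostat, filters the per-bdev NVMe error statistics with jq, asserts the transient-error counter is non-zero, and then kills the bdevperf process. Below is a minimal sketch of that check. The paths, socket, RPC method, jq filter and bdev name are copied from the trace; the function body and surrounding error handling are illustrative, not the verbatim script.

  #!/usr/bin/env bash
  # Minimal sketch of the get_transient_errcount check traced above.
  # Paths, socket and bdev name are taken from the log; the rest is illustrative.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  get_transient_errcount() {
      # bdev_get_iostat reports per-bdev NVMe error counters because the test
      # sets bdev_nvme_set_options --nvme-error-stat before attaching the
      # controller (see the setup trace that follows); jq extracts the
      # COMMAND TRANSIENT TRANSPORT ERROR count seen in the completions above.
      "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # The sub-test passes only if the injected digest errors actually surfaced
  # as transient transport errors (132 of them in this run).
  (( errcount > 0 ))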
00:17:51.180 [2024-07-15 19:55:45.224153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80730 ] 00:17:51.180 [2024-07-15 19:55:45.362783] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.437 [2024-07-15 19:55:45.474259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.437 [2024-07-15 19:55:45.530338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:52.004 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.004 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:52.004 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:52.004 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.262 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.830 nvme0n1 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:52.830 19:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:52.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:52.830 Zero copy mechanism will not be used. 00:17:52.830 Running I/O for 2 seconds... 
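For the 131072-byte, queue-depth-16 pass that starts here, the traced commands boil down to: start bdevperf as an RPC server on /var/tmp/bperf.sock, enable NVMe error statistics and unlimited bdev retries, arm CRC32C corruption in the accel layer, attach the TCP controller with data digest (--ddgst) enabled, and drive I/O for two seconds. A condensed sketch of that sequence follows. All addresses, the NQN and the command-line flags are copied from the trace; waitforlisten stands in for the harness helper of the same name, and rpc_cmd's destination (the running nvmf target application) is approximated here by calling rpc.py against its default socket.

  #!/usr/bin/env bash
  # Condensed sketch of the setup traced above (not the verbatim host/digest.sh).
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock

  # 128 KiB random writes at queue depth 16 for 2 s; -z makes bdevperf wait
  # until perform_tests is issued over RPC. The log notes that 131072 bytes
  # exceeds the 65536-byte zero-copy threshold, so zero copy is not used.
  "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  waitforlisten "$bperfpid" "$sock"   # harness helper: wait for the RPC socket

  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays disabled while connecting, then is re-armed to corrupt
  # every 32nd CRC32C operation so data-digest checks fail during the run.
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the timed workload; completions carrying COMMAND TRANSIENT TRANSPORT
  # ERROR (00/22), as in the output below, are the expected result.
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests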
00:17:52.830 [2024-07-15 19:55:46.903935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.904288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.904337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.908870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.909254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.909332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.913861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.914193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.914248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.918834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.919153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.919181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.923595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.923987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.928368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.928699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.928737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.933333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.933676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.933707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.938349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.938665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.938695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.943140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.943506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.943542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.947913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.948245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.948287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.952705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.953066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.953100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.957637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.958009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.958042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.962528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.962853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.962884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.967492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.967850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.967902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.972456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.972803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.972838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.977487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.830 [2024-07-15 19:55:46.977850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.830 [2024-07-15 19:55:46.977899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.830 [2024-07-15 19:55:46.983052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:46.983422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:46.983455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:46.988433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:46.988789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:46.988820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:46.993735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:46.994071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:46.994101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:46.998836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:46.999161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:46.999192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.003994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.004328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.004352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.009227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.009628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.009679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.014457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.014802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.014834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.019502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.019866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.019900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.024359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.024663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.024693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.029148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.029559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.034076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.034407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.034434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.039170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.039542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 
[2024-07-15 19:55:47.039578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.044194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.044581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.044618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.049310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.049645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.049673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.054263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.054649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.054699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.059447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.059838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.059870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.064525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.064910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.064969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.831 [2024-07-15 19:55:47.069410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:52.831 [2024-07-15 19:55:47.069738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.831 [2024-07-15 19:55:47.069768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.074289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.074611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.074635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.079035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.079392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.079420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.083814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.084142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.084171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.088573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.088901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.088931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.093454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.093760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.093790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.098191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.098544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.098586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.103013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.103359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.103393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.107747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.108077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.108106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.112653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.112988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.113034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.117477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.117791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.117830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.122292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.122607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.122637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.127087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.127455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.127492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.131934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.132264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.132303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.136832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.091 [2024-07-15 19:55:47.137196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.091 [2024-07-15 19:55:47.141722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.091 [2024-07-15 19:55:47.142037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.142068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.146429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.146757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.146785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.151199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.151552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.151587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.155946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.156280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.156330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.160808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.161147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.161178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.165735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.166051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.166082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.170481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.170809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.170839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.175175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 
[2024-07-15 19:55:47.175563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.175600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.179942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.180272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.180310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.184795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.185122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.185150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.189829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.190147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.190181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.194992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.195329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.195369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.200439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.200751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.200783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.205875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.206202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.211339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.211688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.211727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.216731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.217066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.217099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.221988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.222357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.227048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.227411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.227450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.232175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.232533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.232563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.236977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.237312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.237357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.241858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.242184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.242207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.246563] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.246887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.246911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.251409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.251766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.251794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.256137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.256483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.256510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.260930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.261277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.261318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.265682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.266010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.266040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.270537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.270865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.270894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.092 [2024-07-15 19:55:47.275466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.275792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.092 [2024-07-15 19:55:47.275849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:53.092 [2024-07-15 19:55:47.280540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.092 [2024-07-15 19:55:47.280869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.280900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.285719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.286050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.286073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.290854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.291185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.291216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.296070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.296409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.296438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.301286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.301680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.301712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.306400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.306729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.306758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.311584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.311967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.312003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.316648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.317010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.317044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.321810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.322140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.322177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.326604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.326926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.326957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.093 [2024-07-15 19:55:47.331439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.093 [2024-07-15 19:55:47.331768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.093 [2024-07-15 19:55:47.331808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.353 [2024-07-15 19:55:47.336204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.353 [2024-07-15 19:55:47.336561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.353 [2024-07-15 19:55:47.336596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.353 [2024-07-15 19:55:47.340971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.353 [2024-07-15 19:55:47.341294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.353 [2024-07-15 19:55:47.341341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.353 [2024-07-15 19:55:47.345873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.353 [2024-07-15 19:55:47.346181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.353 [2024-07-15 19:55:47.346212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.353 [2024-07-15 19:55:47.350621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.353 [2024-07-15 19:55:47.350951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.350982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.355379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.355728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.355767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.360086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.360429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.360469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.364926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.365277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.365320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.370514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.370855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.370879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.376022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.376398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.376431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.381408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.381743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.381786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.386192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.386533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.386567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.390961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.391311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.391349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.395757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.396072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.396102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.400458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.400789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.405277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.405636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.405670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.410069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.410412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.410435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.414936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.415265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 
[2024-07-15 19:55:47.415347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.419752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.420100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.420142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.424595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.424928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.424983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.429484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.429813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.429852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.434257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.434599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.434636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.438944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.439287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.439317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.443715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.444055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.444093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.448435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.448762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.448801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.453401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.453724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.453753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.458169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.458504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.458541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.462990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.463339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.463367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.467812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.468131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.468158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.472631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.472970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.473001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.477388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.477735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.477768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.482137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.482472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.482499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.354 [2024-07-15 19:55:47.486934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.354 [2024-07-15 19:55:47.487268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.354 [2024-07-15 19:55:47.487349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.491786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.492115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.492155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.496488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.496820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.496852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.501251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.501630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.501662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.506098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.506432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.506454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.511017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.511347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.511378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.515898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.516227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.516257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.520761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.521092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.521119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.525599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.525905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.525942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.530572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.530890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.530913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.535381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.535726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.535755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.540075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.540418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.540445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.544885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.545266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.545329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.549714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 
[2024-07-15 19:55:47.550035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.550059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.554491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.554815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.554838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.559222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.559578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.559611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.564023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.564345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.564389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.568974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.569293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.569338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.573788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.574101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.574131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.578460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.578792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.578819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.583321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.583646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.583681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.587937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.588268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.588306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.355 [2024-07-15 19:55:47.592797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.355 [2024-07-15 19:55:47.593143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.355 [2024-07-15 19:55:47.593171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.597508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.597846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.597875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.602283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.602617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.602640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.607050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.607381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.607407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.611774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.612095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.612119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.616606] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.616926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.616979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.621424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.621768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.621805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.626202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.626548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.626582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.631046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.631379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.631416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.635831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.636162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.636185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.640484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.640812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.640847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.645224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.645582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.645614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:53.616 [2024-07-15 19:55:47.649907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.650240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.650285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.654674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.655008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.655043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.659455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.659801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.659832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.664218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.664565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.664600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.669087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.669419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.669452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.673895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.674215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.674244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.678703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.679017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.679046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.683542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.683864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.683894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.688337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.688658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.688688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.693563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.693929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.698420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.698768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.698802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.703356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.703716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.703756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.708420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.708782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.708814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.713554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.616 [2024-07-15 19:55:47.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.616 [2024-07-15 19:55:47.713933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.616 [2024-07-15 19:55:47.718874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.719196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.719219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.724195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.724553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.724586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.729777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.730165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.730198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.735450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.735785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.735821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.741123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.741467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.741497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.746818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.747248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.747294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.752389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.752782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.752815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.758084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.758437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.758471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.763735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.764078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.764114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.769573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.769927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.775054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.775366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.775394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.780440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.780781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.785820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.786185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.786221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.791318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.791648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 
[2024-07-15 19:55:47.791681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.796910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.797245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.797291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.802252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.802629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.802662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.807488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.807814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.807856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.812915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.813247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.813297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.818247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.818565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.818605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.823737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.824090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.824122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.828933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.829244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.829282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.834115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.834429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.834466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.839341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.839727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.844641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.844947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.844975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.849847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.850164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.850200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.617 [2024-07-15 19:55:47.855042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.617 [2024-07-15 19:55:47.855355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.617 [2024-07-15 19:55:47.855383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.860373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.860675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.865613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.865913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.865945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.870842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.871156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.871198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.876020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.876337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.876366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.881184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.881496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.881525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.886385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.886686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.886728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.891814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.892115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.892145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.896964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.897260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.897301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.902144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.902451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.902480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.907397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.907700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.907724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.912714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.913024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.913057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.918045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.918400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.918439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.923379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.923677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.923706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.928685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.928992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.929022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.934010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.934367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.934412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.939363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 
[2024-07-15 19:55:47.939662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.939686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.944671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.944995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.945026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.950131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.950464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.950489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.955590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.955889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.955916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.961089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.961402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.961436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.966494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.966796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.966821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.971870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.972230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.972277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.977262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.977578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.977610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.982786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.983087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.983119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.878 [2024-07-15 19:55:47.988140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.878 [2024-07-15 19:55:47.988499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.878 [2024-07-15 19:55:47.988532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:47.993833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:47.994170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:47.994203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:47.999391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:47.999689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:47.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.005119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.005431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.005456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.010821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.011175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.016356] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.016677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.016713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.021844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.022201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.022238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.027325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.027703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.027735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.032754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.033119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.033151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.038174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.038508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.038555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.043420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.043720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.043744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.048867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.049176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.049203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:53.879 [2024-07-15 19:55:48.054154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.054465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.054493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.059732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.060028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.060060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.065328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.065672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.065705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.070833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.071142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.071173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.076090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.076414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.076451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.081332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.081634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.081665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.086779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.087121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.087157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.092293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.092662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.092697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.097998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.098327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.098351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.103562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.103933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.103970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.109131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.109448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.109473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.114459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.114773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.114805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.879 [2024-07-15 19:55:48.119858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:53.879 [2024-07-15 19:55:48.120217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.879 [2024-07-15 19:55:48.120249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.125643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.125945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.131010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.131309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.131350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.136318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.136649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.142000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.142346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.142371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.147403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.147784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.147820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.152873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.153188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.153217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.158447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.158765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.158800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.164004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.164352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.164378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.169406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.169710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.169738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.174981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.175311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.175357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.180566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.180910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.180953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.186164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.186557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.191759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.192105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.197423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.197719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.197743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.202901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.203230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 
[2024-07-15 19:55:48.203282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.208518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.208860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.208897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.213814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.214110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.214149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.219090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.219437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.224319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.224611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.224640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.229612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.229900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.229922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.234853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.235202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.140 [2024-07-15 19:55:48.235236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.140 [2024-07-15 19:55:48.240339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.140 [2024-07-15 19:55:48.240667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.240699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.245766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.246079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.246115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.251253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.251601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.251645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.256879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.257228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.257276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.262261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.262600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.262634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.267716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.268020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.268049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.273347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.273705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.273737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.279052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.279401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.279432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.284581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.284924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.284970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.290096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.290463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.290495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.295499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.295829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.295872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.301177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.301518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.301554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.306466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.306832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.306864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.311755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.312055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.312089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.316810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.317136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.317179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.322059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.322387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.322422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.327469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.327799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.327832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.333126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.333476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.333504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.338720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.339020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.339049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.344290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.344618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.344651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.349809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.350126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.350155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.355459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 
[2024-07-15 19:55:48.355833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.355869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.360914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.361235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.366199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.366593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.366629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.371659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.372041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.372077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.377049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.377369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.377396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.141 [2024-07-15 19:55:48.382381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.141 [2024-07-15 19:55:48.382728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.141 [2024-07-15 19:55:48.382761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.387926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.388251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.388276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.393466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.393839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.393874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.399120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.399442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.399469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.404438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.404746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.404778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.410040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.410354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.415598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.415911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.415950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.421030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.421344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.421369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.426290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.426611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.426647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.431749] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.432049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.432078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.437511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.437833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.437864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.442985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.443316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.443341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.448580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.448920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.448962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.454024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.454390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.401 [2024-07-15 19:55:48.454418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.401 [2024-07-15 19:55:48.459534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.401 [2024-07-15 19:55:48.459881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.459914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.465037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.465353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.465379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:54.402 [2024-07-15 19:55:48.470566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.470866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.470897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.475795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.476137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.476174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.481340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.481725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.486858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.487172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.487215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.492476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.492800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.492836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.498038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.498339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.498377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.503273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.503592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.503626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.508691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.509001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.509026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.513942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.514314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.514364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.519025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.519346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.519396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.523794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.524129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.528640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.528992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.529022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.533597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.533919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.533949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.538416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.538789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.538840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.543305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.543651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.543682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.548068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.548416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.548443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.552964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.553364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.553401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.557917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.558257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.558317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.562781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.563145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.567706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.568041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.568075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.572536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.572855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.572884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.577366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.577701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.577736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.582107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.582471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.582499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.586988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.587324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.587362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.591808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.592132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.592160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.596577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.596906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.596945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.601522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.601853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 [2024-07-15 19:55:48.601881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.402 [2024-07-15 19:55:48.606463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.402 [2024-07-15 19:55:48.606826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.402 
[2024-07-15 19:55:48.606871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.611659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.612015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.612044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.616760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.617104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.617132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.621897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.622233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.622289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.626885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.627238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.627291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.632378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.632750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.632783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.637805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.638136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.638165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.403 [2024-07-15 19:55:48.643089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.403 [2024-07-15 19:55:48.643433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.403 [2024-07-15 19:55:48.643460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.648197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.648554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.648596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.653186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.653557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.653592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.658449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.658842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.658877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.663603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.663948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.663976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.668651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.669025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.669062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.673692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.674021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.674051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.678772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.679091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.679114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.683651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.683976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.684005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.688589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.688910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.688947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.693499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.693840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.693870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.698337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.698678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.698712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.703092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.703455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.703487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.707986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.708337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.713650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.713977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.714019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.719230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.719553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.719588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.724452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.724826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.724860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.729978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.730314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.730352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.735023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.735360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.735383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.739994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.740372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.745592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.745938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.745973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.750494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 
[2024-07-15 19:55:48.750867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.750901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.755534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.755899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.755942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.760554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.760892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.663 [2024-07-15 19:55:48.760919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.663 [2024-07-15 19:55:48.765561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.663 [2024-07-15 19:55:48.765883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.765913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.770430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.770801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.770833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.775652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.775959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.775986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.780498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.780819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.780848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.785469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.785793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.785815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.790249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.790608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.790642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.795573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.795921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.795954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.800400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.800726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.800757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.805241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.805606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.805640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.810093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.810458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.815270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.815625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.815660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.820095] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.820438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.820461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.824820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.825176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.825212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.829615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.829940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.829974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.834845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.835177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.835207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.839642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.839986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.840014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.844436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.844766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.844804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.849545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.849891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.849924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:54.664 [2024-07-15 19:55:48.854503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.854834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.854863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.859249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.859605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.859634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.864128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.864488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.864516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.869575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.869915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.869944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.874381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.874709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.874738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.879200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.879563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.879597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.884070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.884402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.884428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.889411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.889774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.889809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.664 [2024-07-15 19:55:48.894257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25f0970) with pdu=0x2000190fef90 00:17:54.664 [2024-07-15 19:55:48.894602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.664 [2024-07-15 19:55:48.894637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.664 00:17:54.664 Latency(us) 00:17:54.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:54.664 nvme0n1 : 2.00 6034.22 754.28 0.00 0.00 2645.53 1906.50 6464.23 00:17:54.664 =================================================================================================================== 00:17:54.664 Total : 6034.22 754.28 0.00 0.00 2645.53 1906.50 6464.23 00:17:54.664 0 00:17:54.923 19:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:54.923 19:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:54.923 19:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:54.923 19:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:54.923 | .driver_specific 00:17:54.923 | .nvme_error 00:17:54.923 | .status_code 00:17:54.923 | .command_transient_transport_error' 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 389 > 0 )) 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80730 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80730 ']' 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80730 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.923 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80730 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.180 killing process with pid 80730 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80730' 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # 
kill 80730 00:17:55.180 Received shutdown signal, test time was about 2.000000 seconds 00:17:55.180 00:17:55.180 Latency(us) 00:17:55.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.180 =================================================================================================================== 00:17:55.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80730 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80522 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80522 ']' 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80522 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80522 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.180 killing process with pid 80522 00:17:55.180 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80522' 00:17:55.181 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80522 00:17:55.181 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80522 00:17:55.439 00:17:55.439 real 0m18.084s 00:17:55.439 user 0m34.626s 00:17:55.439 sys 0m4.840s 00:17:55.439 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.439 19:55:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.439 ************************************ 00:17:55.439 END TEST nvmf_digest_error 00:17:55.440 ************************************ 00:17:55.440 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:55.440 19:55:49 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:55.440 19:55:49 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:55.440 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.440 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.698 rmmod nvme_tcp 00:17:55.698 rmmod nvme_fabrics 00:17:55.698 rmmod nvme_keyring 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80522 ']' 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@490 -- # killprocess 80522 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80522 ']' 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80522 00:17:55.698 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80522) - No such process 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80522 is not found' 00:17:55.698 Process with pid 80522 is not found 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:55.698 00:17:55.698 real 0m37.221s 00:17:55.698 user 1m10.142s 00:17:55.698 sys 0m9.873s 00:17:55.698 ************************************ 00:17:55.698 END TEST nvmf_digest 00:17:55.698 ************************************ 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.698 19:55:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:55.698 19:55:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.698 19:55:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:55.698 19:55:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:55.698 19:55:49 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:55.698 19:55:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.698 19:55:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.698 19:55:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.698 ************************************ 00:17:55.698 START TEST nvmf_host_multipath 00:17:55.698 ************************************ 00:17:55.698 19:55:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:55.957 * Looking for test storage... 
00:17:55.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:55.957 19:55:49 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:55.957 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.958 19:55:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:55.958 Cannot find device "nvmf_tgt_br" 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.958 Cannot find device "nvmf_tgt_br2" 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:17:55.958 Cannot find device "nvmf_tgt_br" 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:55.958 Cannot find device "nvmf_tgt_br2" 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:55.958 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:56.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:56.236 00:17:56.236 --- 10.0.0.2 ping statistics --- 00:17:56.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.236 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:56.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:56.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:56.236 00:17:56.236 --- 10.0.0.3 ping statistics --- 00:17:56.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.236 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:56.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:56.236 00:17:56.236 --- 10.0.0.1 ping statistics --- 00:17:56.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.236 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:56.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
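Everything nvmftestinit has done up to this point is network plumbing: the interface names, the 10.0.0.x addresses, and the port-4420 firewall rule all come from nvmf_veth_init. As a reference, here is a minimal standalone re-creation of that topology (run as root), using exactly the device names and addresses seen in this run; it is a sketch of what the traced commands above amount to, not an additional step the harness performs:

ip netns add nvmf_tgt_ns_spdk                               # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair (stays on the host)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge ties the host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # both target addresses must answer from the namespace
modprobe nvme-tcp                                           # kernel NVMe/TCP module, loaded here as in the trace above

With that in place, 10.0.0.2 and 10.0.0.3 are the two reachable target addresses and 10.0.0.1 is the initiator side, which matches the three ping results recorded above.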
00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80999 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80999 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80999 ']' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.236 19:55:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:56.236 [2024-07-15 19:55:50.366732] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:17:56.236 [2024-07-15 19:55:50.367529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.494 [2024-07-15 19:55:50.502254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.494 [2024-07-15 19:55:50.587854] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.494 [2024-07-15 19:55:50.588176] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.494 [2024-07-15 19:55:50.588356] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.494 [2024-07-15 19:55:50.588577] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.494 [2024-07-15 19:55:50.588625] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
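The startup banner above notes that tracepoints are enabled (-e 0xFFFF turns on every group) and names the snapshot command. As a hedged aside, not something this recorded run actually does, the snapshot could be taken like this, assuming spdk_trace was built alongside nvmf_tgt in build/bin:

# Decode the live tracepoint buffer the target just announced (app name nvmf, shm id 0):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# Or keep the raw shared-memory file for offline analysis, as the banner suggests:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0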
00:17:56.494 [2024-07-15 19:55:50.588825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.494 [2024-07-15 19:55:50.588834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.494 [2024-07-15 19:55:50.641171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80999 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:57.425 [2024-07-15 19:55:51.560305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.425 19:55:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:57.683 Malloc0 00:17:57.683 19:55:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:57.941 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.200 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.458 [2024-07-15 19:55:52.540827] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.458 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:58.716 [2024-07-15 19:55:52.797050] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:58.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
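Condensed, the target-side bring-up traced above is six rpc.py calls against the default /var/tmp/spdk.sock: create the TCP transport, back a Malloc bdev, expose it through one subsystem, and listen on the two ports (4420 and 4421) the multipath test will flip between. A sketch of the same sequence, assuming the nvmf_tgt started in the previous step is already up and rpc.py is taken from the same repo checkout:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r on nvmf_create_subsystem enables ANA reporting, which is what the later set_ANA_state calls rely on. The two listeners become the two paths: the bdevperf trace that follows attaches Nvme0 to port 4420 and then again to 4421 with -x multipath, and each confirm_io_on_port pass flips the listeners' ANA states, attaches the nvmf_path.bt bpftrace probes to the target, and parses the resulting @path[10.0.0.2, <port>] counters to check that I/O is actually flowing on the expected port.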
00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81050 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81050 /var/tmp/bdevperf.sock 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81050 ']' 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.716 19:55:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:58.973 19:55:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.973 19:55:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:58.973 19:55:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:59.230 19:55:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:59.489 Nvme0n1 00:17:59.489 19:55:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:59.747 Nvme0n1 00:18:00.005 19:55:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:00.005 19:55:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:00.941 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:00.941 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:01.227 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:01.486 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:01.486 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81088 00:18:01.486 19:55:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:01.486 19:55:55 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.044 Attaching 4 probes... 00:18:08.044 @path[10.0.0.2, 4421]: 18488 00:18:08.044 @path[10.0.0.2, 4421]: 18870 00:18:08.044 @path[10.0.0.2, 4421]: 19495 00:18:08.044 @path[10.0.0.2, 4421]: 19200 00:18:08.044 @path[10.0.0.2, 4421]: 17947 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81088 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:08.044 19:56:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:08.044 19:56:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:08.302 19:56:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:08.302 19:56:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.302 19:56:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81200 00:18:08.302 19:56:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.871 Attaching 4 probes... 
00:18:14.871 @path[10.0.0.2, 4420]: 17648 00:18:14.871 @path[10.0.0.2, 4420]: 17815 00:18:14.871 @path[10.0.0.2, 4420]: 17849 00:18:14.871 @path[10.0.0.2, 4420]: 18092 00:18:14.871 @path[10.0.0.2, 4420]: 17768 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81200 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:14.871 19:56:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:15.130 19:56:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:15.130 19:56:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:15.130 19:56:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81313 00:18:15.130 19:56:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.688 Attaching 4 probes... 
00:18:21.688 @path[10.0.0.2, 4421]: 14421 00:18:21.688 @path[10.0.0.2, 4421]: 17482 00:18:21.688 @path[10.0.0.2, 4421]: 19204 00:18:21.688 @path[10.0.0.2, 4421]: 19852 00:18:21.688 @path[10.0.0.2, 4421]: 18777 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:21.688 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81313 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:21.689 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:21.948 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:21.948 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81431 00:18:21.948 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:21.948 19:56:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:28.512 19:56:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.512 19:56:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.512 Attaching 4 probes... 
00:18:28.512 00:18:28.512 00:18:28.512 00:18:28.512 00:18:28.512 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81431 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:28.512 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:28.786 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:28.786 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.786 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81542 00:18:28.786 19:56:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:35.360 19:56:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.360 19:56:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.360 Attaching 4 probes... 
00:18:35.360 @path[10.0.0.2, 4421]: 18503 00:18:35.360 @path[10.0.0.2, 4421]: 19222 00:18:35.360 @path[10.0.0.2, 4421]: 18710 00:18:35.360 @path[10.0.0.2, 4421]: 17982 00:18:35.360 @path[10.0.0.2, 4421]: 18712 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81542 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.360 19:56:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:36.295 19:56:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:36.295 19:56:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81667 00:18:36.295 19:56:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:36.295 19:56:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.861 Attaching 4 probes... 
00:18:42.861 @path[10.0.0.2, 4420]: 17596 00:18:42.861 @path[10.0.0.2, 4420]: 17795 00:18:42.861 @path[10.0.0.2, 4420]: 17521 00:18:42.861 @path[10.0.0.2, 4420]: 17337 00:18:42.861 @path[10.0.0.2, 4420]: 17360 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81667 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:42.861 [2024-07-15 19:56:36.856473] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:42.861 19:56:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:43.119 19:56:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:49.750 19:56:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:49.750 19:56:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81837 00:18:49.750 19:56:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:49.750 19:56:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:55.023 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.023 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.282 Attaching 4 probes... 
00:18:55.282 @path[10.0.0.2, 4421]: 16833 00:18:55.282 @path[10.0.0.2, 4421]: 17355 00:18:55.282 @path[10.0.0.2, 4421]: 17120 00:18:55.282 @path[10.0.0.2, 4421]: 17123 00:18:55.282 @path[10.0.0.2, 4421]: 17091 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81837 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81050 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81050 ']' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81050 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81050 00:18:55.282 killing process with pid 81050 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81050' 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81050 00:18:55.282 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81050 00:18:55.550 Connection closed with partial response: 00:18:55.550 00:18:55.551 00:18:55.551 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81050 00:18:55.551 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:55.551 [2024-07-15 19:55:52.860248] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:18:55.551 [2024-07-15 19:55:52.860357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81050 ] 00:18:55.551 [2024-07-15 19:55:52.995046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.551 [2024-07-15 19:55:53.079358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.551 [2024-07-15 19:55:53.131658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:55.551 Running I/O for 90 seconds... 
00:18:55.551 [2024-07-15 19:56:02.301610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.301966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.301985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.551 [2024-07-15 19:56:02.302673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.551 [2024-07-15 19:56:02.302844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.551 [2024-07-15 19:56:02.302860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.302897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:55.552 [2024-07-15 19:56:02.302928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.302963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.302978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.302998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.552 [2024-07-15 19:56:02.303623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.303968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.303988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.304003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.304023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.304038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:55.552 
[2024-07-15 19:56:02.304066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.552 [2024-07-15 19:56:02.304081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:55.552 [2024-07-15 19:56:02.304101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.304116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.304186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.304221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.553 [2024-07-15 19:56:02.304912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.304936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.304980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:55.553 [2024-07-15 19:56:02.305324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.553 [2024-07-15 19:56:02.305445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.553 [2024-07-15 19:56:02.305466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.305489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.305557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.305631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75432 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.305954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.305989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.306212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.306227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.554 [2024-07-15 19:56:02.307769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.307815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.307853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.307904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.307955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.307975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.307990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:18:55.554 [2024-07-15 19:56:02.308053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:55.554 [2024-07-15 19:56:02.308479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.554 [2024-07-15 19:56:02.308500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:02.308522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:02.308544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:02.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:02.308582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:02.308607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:02.308624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.877813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.877887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.877946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.877967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.877990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878306] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 
19:56:08.878684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.555 [2024-07-15 19:56:08.878831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.878965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.878986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.879002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:55.555 [2024-07-15 19:56:08.879023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.555 [2024-07-15 19:56:08.879038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21448 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.556 [2024-07-15 19:56:08.879774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 
19:56:08.879839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.879972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:55.556 [2024-07-15 19:56:08.879993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.556 [2024-07-15 19:56:08.880015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.880399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.880964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.880982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.881019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 
19:56:08.881064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.557 [2024-07-15 19:56:08.881102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:55.557 [2024-07-15 19:56:08.881342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.557 [2024-07-15 19:56:08.881357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21728 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.881742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.881964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.881985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 
19:56:08.882226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.882312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.882328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.558 [2024-07-15 19:56:08.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.558 [2024-07-15 19:56:08.883473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:55.558 [2024-07-15 19:56:08.883519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:08.883907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:08.883924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.929971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.929990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.559 [2024-07-15 19:56:15.930003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:55.559 [2024-07-15 19:56:15.930141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.559 [2024-07-15 19:56:15.930348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:55.559 [2024-07-15 19:56:15.930368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.930884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.930971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.930985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
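The NOTICE lines in this stretch of the log are SPDK's error-completion tracing: per the prefixes printed above, nvme_qpair.c emits one line for the failed I/O command (nvme_io_qpair_print_command) and one for its completion (spdk_nvme_print_completion), and each completion carries a "(SCT/SC)" status pair such as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" here or "ABORTED - SQ DELETION (00/08)" later in this same run. Below is a minimal, standalone C sketch (not part of the test suite) that maps only these two pairs to their NVMe-spec meanings, to make the hex pairs in the log easier to read; the decode strings are taken from the log text itself and the NVMe status-code definitions.

    #include <stdio.h>

    /*
     * Decode the "(SCT/SC)" pair that the completion NOTICE lines above append,
     * e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".  Only the two pairs that
     * occur in this log are handled; SCT = Status Code Type, SC = Status Code.
     */
    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x3 && sc == 0x02) {
            /* Path Related Status / Asymmetric Access Inaccessible (ANA state) */
            return "path related: asymmetric access inaccessible";
        }
        if (sct == 0x0 && sc == 0x08) {
            /* Generic Command Status / Command Aborted due to SQ Deletion */
            return "generic: command aborted due to SQ deletion";
        }
        return "not decoded here";
    }

    int main(void)
    {
        /* The two (SCT/SC) pairs printed in this section of the log. */
        unsigned pairs[][2] = { { 0x3, 0x02 }, { 0x0, 0x08 } };

        for (unsigned i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
            printf("(%02x/%02x) -> %s\n", pairs[i][0], pairs[i][1],
                   decode_status(pairs[i][0], pairs[i][1]));
        }
        return 0;
    }

Both pairs are non-fatal from the host's point of view during this test: the (03/02) completions reflect the namespace's ANA state becoming inaccessible on the path under test, and the (00/08) completions that follow are I/Os flushed back when the submission queue is torn down.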
00:18:55.560 [2024-07-15 19:56:15.931219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.560 [2024-07-15 19:56:15.931451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.931485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.931520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.560 [2024-07-15 19:56:15.931553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:55.560 [2024-07-15 19:56:15.931573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.931587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.931629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.931664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.931698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.931733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.931972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.931992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:55.561 [2024-07-15 19:56:15.932255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.561 [2024-07-15 19:56:15.932302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.561 [2024-07-15 19:56:15.932593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:55.561 [2024-07-15 19:56:15.932613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.932865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.932899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.932942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.932971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.932988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 
m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.933441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.933457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.562 [2024-07-15 19:56:15.934200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.562 [2024-07-15 19:56:15.934631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.562 [2024-07-15 19:56:15.934660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:15.934934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:15.934950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:55.563 [2024-07-15 19:56:29.253593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.563 [2024-07-15 19:56:29.253801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.253833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.253866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.253909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.253943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.253975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.253996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.254009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.254028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.254042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.254079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.254092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.563 [2024-07-15 19:56:29.254112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.563 [2024-07-15 19:56:29.254126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.564 [2024-07-15 19:56:29.254856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.254975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.254987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.255001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.255013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.255026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.255039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.255052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.255071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.255086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.255098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.564 [2024-07-15 19:56:29.255112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.564 [2024-07-15 19:56:29.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 
[2024-07-15 19:56:29.255461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.565 [2024-07-15 19:56:29.255940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.255979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.255992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.256005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.256018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.256031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.256044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.565 [2024-07-15 19:56:29.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.565 [2024-07-15 19:56:29.256070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.566 [2024-07-15 19:56:29.256103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.566 [2024-07-15 19:56:29.256130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.566 [2024-07-15 19:56:29.256156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.566 00:18:55.566 [2024-07-15 19:56:29.256454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 
[2024-07-15 19:56:29.256574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.566 [2024-07-15 19:56:29.256601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f80b0 is same with the state(5) to be set 00:18:55.566 [2024-07-15 19:56:29.256630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.566 [2024-07-15 19:56:29.256641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.566 [2024-07-15 19:56:29.256651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16552 len:8 PRP1 0x0 PRP2 0x0 00:18:55.566 [2024-07-15 19:56:29.256663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.566 [2024-07-15 19:56:29.256701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.566 [2024-07-15 19:56:29.256710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16944 len:8 PRP1 0x0 PRP2 0x0 00:18:55.566 [2024-07-15 19:56:29.256722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.566 [2024-07-15 19:56:29.256743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.566 [2024-07-15 19:56:29.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16952 len:8 PRP1 0x0 PRP2 0x0 00:18:55.566 [2024-07-15 19:56:29.256764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.566 [2024-07-15 19:56:29.256785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.566 [2024-07-15 19:56:29.256794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:8 PRP1 0x0 PRP2 0x0 00:18:55.566 [2024-07-15 19:56:29.256806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.566 [2024-07-15 19:56:29.256818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.566 [2024-07-15 19:56:29.256833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.566 [2024-07-15 19:56:29.256843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16968 len:8 PRP1 0x0 PRP2 0x0 00:18:55.566 [2024-07-15 19:56:29.256855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.256868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.256877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.256886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16976 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.256898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.256910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.256919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.256933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16984 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.256946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.256958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.256968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17000 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17008 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17016 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 
[2024-07-15 19:56:29.257185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17032 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17040 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17048 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.567 [2024-07-15 19:56:29.257446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.567 [2024-07-15 19:56:29.257456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17064 len:8 PRP1 0x0 PRP2 0x0 00:18:55.567 [2024-07-15 19:56:29.257468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.257521] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7f80b0 was disconnected and freed. reset controller. 00:18:55.567 [2024-07-15 19:56:29.258552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.567 [2024-07-15 19:56:29.258625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.567 [2024-07-15 19:56:29.258646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.567 [2024-07-15 19:56:29.258675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9a70 (9): Bad file descriptor 00:18:55.567 [2024-07-15 19:56:29.259029] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.567 [2024-07-15 19:56:29.259058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9a70 with addr=10.0.0.2, port=4421 00:18:55.567 [2024-07-15 19:56:29.259074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a70 is same with the state(5) to be set 00:18:55.567 [2024-07-15 19:56:29.259109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9a70 (9): Bad file descriptor 00:18:55.567 [2024-07-15 19:56:29.259143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.567 [2024-07-15 19:56:29.259163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.567 [2024-07-15 19:56:29.259177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.567 [2024-07-15 19:56:29.259206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:55.567 [2024-07-15 19:56:29.259222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.567 [2024-07-15 19:56:39.325466] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
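The abort/reset sequence above is driven by the single RPC call the trace shows being issued at host/multipath.sh@120. A minimal sketch of that step, run by hand, is shown below; the rpc.py path and subsystem NQN are copied from the trace, and the note that the host then retries the 10.0.0.2:4421 path until "Resetting controller successful" comes from the messages above, not from the script itself.

  # Sketch only (assumption: same rpc.py path and NQN as in the trace above).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Deleting the subsystem tears down its queues; as the log above shows, the
  # queued I/O then completes with "ABORTED - SQ DELETION" and the host-side
  # bdev_nvme disconnects the qpair and enters its reset/reconnect cycle.
  "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1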
00:18:55.567 Received shutdown signal, test time was about 55.291300 seconds 00:18:55.567 00:18:55.567 Latency(us) 00:18:55.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.567 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:55.567 Verification LBA range: start 0x0 length 0x4000 00:18:55.567 Nvme0n1 : 55.29 7647.95 29.87 0.00 0.00 16703.60 1213.91 7046430.72 00:18:55.567 =================================================================================================================== 00:18:55.567 Total : 7647.95 29.87 0.00 0.00 16703.60 1213.91 7046430.72 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.863 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.864 rmmod nvme_tcp 00:18:55.864 rmmod nvme_fabrics 00:18:55.864 rmmod nvme_keyring 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80999 ']' 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80999 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80999 ']' 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80999 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80999 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80999' 00:18:55.864 killing process with pid 80999 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80999 00:18:55.864 19:56:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80999 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath 
-- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:56.122 ************************************ 00:18:56.122 END TEST nvmf_host_multipath 00:18:56.122 ************************************ 00:18:56.122 00:18:56.122 real 1m0.400s 00:18:56.122 user 2m47.204s 00:18:56.122 sys 0m18.404s 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.122 19:56:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 19:56:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:56.122 19:56:50 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:56.122 19:56:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.122 19:56:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.122 19:56:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 ************************************ 00:18:56.122 START TEST nvmf_timeout 00:18:56.122 ************************************ 00:18:56.122 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:56.381 * Looking for test storage... 
00:18:56.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.381 
19:56:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.381 19:56:50 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:56.381 Cannot find device "nvmf_tgt_br" 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.381 Cannot find device "nvmf_tgt_br2" 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:56.381 Cannot find device "nvmf_tgt_br" 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:56.381 Cannot find device "nvmf_tgt_br2" 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:56.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:56.381 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:56.381 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:56.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:56.640 00:18:56.640 --- 10.0.0.2 ping statistics --- 00:18:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.640 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:56.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:56.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:18:56.640 00:18:56.640 --- 10.0.0.3 ping statistics --- 00:18:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.640 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:56.640 00:18:56.640 --- 10.0.0.1 ping statistics --- 00:18:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.640 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82148 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82148 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82148 ']' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.640 19:56:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.640 [2024-07-15 19:56:50.840643] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
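The nvmf_veth_init trace above builds the virtual topology this test runs on: the target lives inside the nvmf_tgt_ns_spdk network namespace and reaches the initiator over veth pairs joined by the nvmf_br bridge, which is why nvmf_tgt is launched through ip netns exec above. A condensed sketch of that setup, using only commands, device names and addresses that appear in the trace (the second target interface, the link-up steps and the teardown are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # nvmf_init_if keeps 10.0.0.1 on the host; nvmf_init_br joins the bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # nvmf_tgt_if moves into the namespace; nvmf_tgt_br joins the bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # let NVMe/TCP traffic (port 4420) reach the initiator-side interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify that wiring before the target is brought up; the target's startup output continues below.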
00:18:56.640 [2024-07-15 19:56:50.840736] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.898 [2024-07-15 19:56:50.979561] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.898 [2024-07-15 19:56:51.106493] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.898 [2024-07-15 19:56:51.106564] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.898 [2024-07-15 19:56:51.106578] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.898 [2024-07-15 19:56:51.106589] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.898 [2024-07-15 19:56:51.106599] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.898 [2024-07-15 19:56:51.106762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.898 [2024-07-15 19:56:51.106778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.156 [2024-07-15 19:56:51.164069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.722 19:56:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:57.980 [2024-07-15 19:56:52.136087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.980 19:56:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:58.238 Malloc0 00:18:58.239 19:56:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.497 19:56:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.756 19:56:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.015 [2024-07-15 19:56:53.129770] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82197 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82197 /var/tmp/bdevperf.sock 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82197 ']' 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.015 19:56:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.015 [2024-07-15 19:56:53.205563] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:18:59.015 [2024-07-15 19:56:53.205683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82197 ] 00:18:59.272 [2024-07-15 19:56:53.347763] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.272 [2024-07-15 19:56:53.468242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.531 [2024-07-15 19:56:53.523344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:00.097 19:56:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.097 19:56:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:00.097 19:56:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:00.097 19:56:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:00.663 NVMe0n1 00:19:00.663 19:56:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82221 00:19:00.663 19:56:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.663 19:56:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:00.663 Running I/O for 10 seconds... 
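At this point the target is fully configured (TCP transport, Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420) and bdevperf has been started in -z mode with a queue depth of 128, so the traced RPCs above attach the controller with a short reconnect/loss window and kick off a 10-second verify workload. What follows is the point of the nvmf_timeout test: the listener is removed while I/O is in flight, which is what the ABORTED - SQ DELETION completions below show, and the bdev_nvme layer then has to reconnect within the configured window. A condensed sketch of that initiator-side sequence, with paths, names and flag values exactly as they appear in the trace (the perform_tests call is backgrounded here only to mirror the sleep-then-remove ordering of the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # per the flag names: retry the connection every 2 s, give the controller
    # up for lost after 5 s without a connection
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the verify workload, then pull the listener out from under it
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420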
00:19:01.599 19:56:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.860
[2024-07-15 19:56:55.849758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbdf70 is same with the state(5) to be set 00:19:01.860
(the tcp.c:1621 message above repeats with successive timestamps through 2024-07-15 19:56:55.850761 while the target tears the queue pair down; the host-side nvme_qpair/nvme_tcp entries that were interleaved with that run follow)
[2024-07-15 19:56:55.850085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.861
[2024-07-15 19:56:55.850115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.861
[2024-07-15 19:56:55.850129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.861
[2024-07-15 19:56:55.850140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.861
[2024-07-15 19:56:55.850151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.861
[2024-07-15 19:56:55.850161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.861
[2024-07-15 19:56:55.850172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.861
[2024-07-15 19:56:55.850182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.861
[2024-07-15 19:56:55.850192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eee0 is same with the state(5) to be set 00:19:01.861
[2024-07-15 19:56:55.850818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862
[2024-07-15 19:56:55.850836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862
[2024-07-15 19:56:55.850856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862
[2024-07-15 19:56:55.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862
[2024-07-15 19:56:55.850879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862
[2024-07-15 19:56:55.850888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862
[2024-07-15 19:56:55.850900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862
[2024-07-15 19:56:55.850909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862
[2024-07-15 19:56:55.850920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862
[2024-07-15 19:56:55.850930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:01.862 [2024-07-15 19:56:55.850942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.850951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.850963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.850972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.850984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.862 [2024-07-15 19:56:55.851532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.862 [2024-07-15 19:56:55.851542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.851979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.851989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:01.863 [2024-07-15 19:56:55.852052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852276] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.863 [2024-07-15 19:56:55.852329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.863 [2024-07-15 19:56:55.852341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.852971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.852982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.853001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.853015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.853024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.853040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.853050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.853062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.853071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.864 [2024-07-15 19:56:55.853083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.864 [2024-07-15 19:56:55.853093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 
[2024-07-15 19:56:55.853194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.865 [2024-07-15 19:56:55.853653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.865 [2024-07-15 19:56:55.853664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.865 [2024-07-15 19:56:55.853673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.866 [2024-07-15 19:56:55.853684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf9a0 is same with the state(5) to be set 00:19:01.866 [2024-07-15 19:56:55.853695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:01.866 [2024-07-15 19:56:55.853703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:01.866 [2024-07-15 19:56:55.853712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:19:01.866 [2024-07-15 19:56:55.853720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.866 [2024-07-15 19:56:55.853772] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcaf9a0 was disconnected and freed. reset controller. 00:19:01.866 [2024-07-15 19:56:55.854032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:01.866 [2024-07-15 19:56:55.854055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5eee0 (9): Bad file descriptor 00:19:01.866 [2024-07-15 19:56:55.854160] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:01.866 [2024-07-15 19:56:55.854181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5eee0 with addr=10.0.0.2, port=4420 00:19:01.866 [2024-07-15 19:56:55.854192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eee0 is same with the state(5) to be set 00:19:01.866 [2024-07-15 19:56:55.854211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5eee0 (9): Bad file descriptor 00:19:01.866 [2024-07-15 19:56:55.854227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.866 [2024-07-15 19:56:55.854237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:01.866 [2024-07-15 19:56:55.854248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:01.866 [2024-07-15 19:56:55.854280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
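The repeated "connect() failed, errno = 111" entries above are the initiator's reconnect attempts being refused while nothing is accepting on 10.0.0.2:4420 (the listener is only restored later with nvmf_subsystem_add_listener). On Linux, errno 111 is ECONNREFUSED; purely as an illustrative check on a typical install with kernel headers, not something the harness runs:

  # ECONNREFUSED is 111 in the kernel's generic errno table
  grep ECONNREFUSED /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */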
00:19:01.866 [2024-07-15 19:56:55.854293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:01.866 19:56:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:03.771 [2024-07-15 19:56:57.871182] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:03.771 [2024-07-15 19:56:57.871247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5eee0 with addr=10.0.0.2, port=4420 00:19:03.771 [2024-07-15 19:56:57.871274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eee0 is same with the state(5) to be set 00:19:03.771 [2024-07-15 19:56:57.871304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5eee0 (9): Bad file descriptor 00:19:03.771 [2024-07-15 19:56:57.871325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:03.771 [2024-07-15 19:56:57.871335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:03.771 [2024-07-15 19:56:57.871346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:03.771 [2024-07-15 19:56:57.871427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:03.771 [2024-07-15 19:56:57.871441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:03.771 19:56:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:03.771 19:56:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.771 19:56:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:04.029 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:04.029 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:04.029 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:04.029 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:04.288 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:04.288 19:56:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:05.663 [2024-07-15 19:56:59.871625] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.663 [2024-07-15 19:56:59.871701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5eee0 with addr=10.0.0.2, port=4420 00:19:05.663 [2024-07-15 19:56:59.871718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eee0 is same with the state(5) to be set 00:19:05.663 [2024-07-15 19:56:59.871747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5eee0 (9): Bad file descriptor 00:19:05.663 [2024-07-15 19:56:59.871767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:05.663 [2024-07-15 19:56:59.871777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:05.663 [2024-07-15 19:56:59.871789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:05.663 [2024-07-15 19:56:59.871818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:05.663 [2024-07-15 19:56:59.871830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.196 [2024-07-15 19:57:01.871889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:08.196 [2024-07-15 19:57:01.871986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.196 [2024-07-15 19:57:01.871999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:08.196 [2024-07-15 19:57:01.872011] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:08.196 [2024-07-15 19:57:01.872048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:08.762 00:19:08.763 Latency(us) 00:19:08.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.763 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.763 Verification LBA range: start 0x0 length 0x4000 00:19:08.763 NVMe0n1 : 8.14 948.47 3.70 15.73 0.00 132652.38 4021.53 7046430.72 00:19:08.763 =================================================================================================================== 00:19:08.763 Total : 948.47 3.70 15.73 0.00 132652.38 4021.53 7046430.72 00:19:08.763 0 00:19:09.329 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:09.329 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.329 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:09.588 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:09.588 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:09.588 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:09.588 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82221 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82197 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82197 ']' 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82197 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82197 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:09.846 killing process with pid 82197 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82197' 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82197 00:19:09.846 
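The -x trace above is host/timeout.sh confirming that, once the controller-loss timeout has expired without a successful reconnect, bdev_nvme has removed both the controller and its namespace bdev: the same get_controller/get_bdev helpers that returned NVMe0 and NVMe0n1 earlier now print nothing, so the checks reduce to [[ '' == '' ]]. A minimal sketch of that check pattern, with the rpc.py and jq invocations taken verbatim from the trace but the helper bodies themselves reconstructed (an assumption, not the script's actual source):

  get_controller() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_get_bdevs | jq -r '.[].name'
  }
  # Before the loss timeout these print NVMe0 / NVMe0n1; after it expires
  # both RPCs return empty lists, so the comparisons see empty strings.
  [[ "$(get_controller)" == '' ]]
  [[ "$(get_bdev)" == '' ]]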
Received shutdown signal, test time was about 9.277372 seconds 00:19:09.846 00:19:09.846 Latency(us) 00:19:09.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.846 =================================================================================================================== 00:19:09.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.846 19:57:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82197 00:19:10.105 19:57:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.365 [2024-07-15 19:57:04.422254] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82337 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82337 /var/tmp/bdevperf.sock 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82337 ']' 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.365 19:57:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:10.365 [2024-07-15 19:57:04.498192] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
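For the second half of the test the listener is restored and a fresh bdevperf instance is started against it. Stripped of the -x noise, the sequence recorded above and continued below is roughly the following; the polling loop stands in for the harness's waitforlisten helper and is an assumption, not its actual implementation:

  # Re-expose the subsystem on the target.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start bdevperf with its own RPC socket; with -z the I/O run itself is
  # only kicked off later via bdevperf.py perform_tests, as the trace shows.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  # Wait until the bdevperf RPC socket answers before issuing further RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      rpc_get_methods &>/dev/null; do
      sleep 0.1
  done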
00:19:10.365 [2024-07-15 19:57:04.498310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82337 ] 00:19:10.623 [2024-07-15 19:57:04.641959] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.623 [2024-07-15 19:57:04.751094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.623 [2024-07-15 19:57:04.803148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:11.560 19:57:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.560 19:57:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:11.560 19:57:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:11.560 19:57:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:11.823 NVMe0n1 00:19:11.823 19:57:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.823 19:57:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82361 00:19:11.823 19:57:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:12.081 Running I/O for 10 seconds... 00:19:13.018 19:57:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.018 [2024-07-15 19:57:07.243002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.018 [2024-07-15 19:57:07.243069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.018 [2024-07-15 19:57:07.243072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with t[2024-07-15 19:57:07.243085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.018 he state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.018 [2024-07-15 19:57:07.243103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.018 [2024-07-15 19:57:07.243113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.018 [2024-07-15 19:57:07.243122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-15 19:57:07.243132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with tid:0 cdw10:00000000 cdw11:00000000 00:19:13.018 he state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 19:57:07.243142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.018 he state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with t[2024-07-15 19:57:07.243152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same whe state(5) to be set 00:19:13.018 ith the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243204] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 
19:57:07.243303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243386] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to 
be set 00:19:13.018 [2024-07-15 19:57:07.243496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243546] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.018 [2024-07-15 19:57:07.243832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.243840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.243848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.243857] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.243866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.243875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13620 is same with the state(5) to be set 00:19:13.019 [2024-07-15 19:57:07.244773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.244984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.244995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 
19:57:07.245480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.245989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.245998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.019 [2024-07-15 19:57:07.246347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.019 [2024-07-15 19:57:07.246390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.019 [2024-07-15 19:57:07.246401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246562] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.246821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.246985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.020 [2024-07-15 19:57:07.247136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.020 [2024-07-15 19:57:07.247157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(5) to be set 00:19:13.020 [2024-07-15 19:57:07.247180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.020 [2024-07-15 19:57:07.247530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:19:13.020 [2024-07-15 19:57:07.247540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.020 [2024-07-15 19:57:07.247549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.020 [2024-07-15 19:57:07.247557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.021 [2024-07-15 19:57:07.247565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:19:13.021 [2024-07-15 19:57:07.247574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.021 [2024-07-15 19:57:07.247583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.021 [2024-07-15 19:57:07.247590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.021 [2024-07-15 19:57:07.247598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:19:13.021 [2024-07-15 19:57:07.247607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.021 [2024-07-15 19:57:07.247616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.021 [2024-07-15 19:57:07.247624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.021 [2024-07-15 19:57:07.247631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:19:13.021 [2024-07-15 19:57:07.247640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.021 [2024-07-15 19:57:07.247649] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.021 [2024-07-15 19:57:07.247657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.021 [2024-07-15 19:57:07.247664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:19:13.021 [2024-07-15 19:57:07.247673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.021 [2024-07-15 19:57:07.247683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 19:57:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:13.280 [2024-07-15 19:57:07.264368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62856 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 [2024-07-15 19:57:07.264471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 [2024-07-15 19:57:07.264483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62864 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 [2024-07-15 19:57:07.264525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 [2024-07-15 19:57:07.264537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62872 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 [2024-07-15 19:57:07.264578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 [2024-07-15 19:57:07.264596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62880 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 [2024-07-15 19:57:07.264637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 [2024-07-15 19:57:07.264648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62888 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 
[2024-07-15 19:57:07.264688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.280 [2024-07-15 19:57:07.264699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.280 [2024-07-15 19:57:07.264711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62896 len:8 PRP1 0x0 PRP2 0x0 00:19:13.280 [2024-07-15 19:57:07.264725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.280 [2024-07-15 19:57:07.264836] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xffd9a0 was disconnected and freed. reset controller. 00:19:13.280 [2024-07-15 19:57:07.264922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:13.280 [2024-07-15 19:57:07.265208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.280 [2024-07-15 19:57:07.265372] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.280 [2024-07-15 19:57:07.265418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420 00:19:13.280 [2024-07-15 19:57:07.265438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:13.280 [2024-07-15 19:57:07.265469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:13.280 [2024-07-15 19:57:07.265496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.280 [2024-07-15 19:57:07.265511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:13.280 [2024-07-15 19:57:07.265529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:13.280 [2024-07-15 19:57:07.265561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
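The connect() failures above all report errno = 111 because, at this point in the test, the target's TCP listener on 10.0.0.2:4420 has been removed, so every host-side reconnect attempt is refused until the listener is re-added below. On Linux, errno 111 is ECONNREFUSED; a quick check (purely illustrative, assuming python3 is available on the test VM):

python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
# prints: 111 Connection refused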
00:19:13.280 [2024-07-15 19:57:07.265579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:14.215  19:57:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-15 19:57:08.265742] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 19:57:08.265795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420
[2024-07-15 19:57:08.265813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set
[2024-07-15 19:57:08.265848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor
[2024-07-15 19:57:08.265867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-15 19:57:08.265887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-15 19:57:08.265898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 19:57:08.265934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-15 19:57:08.265946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:14.473 [2024-07-15 19:57:08.477573] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:14.473  19:57:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82361
00:19:15.041 [2024-07-15 19:57:09.276887] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:23.155
00:19:23.155                                                  Latency(us)
00:19:23.155 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:23.155 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:23.155   Verification LBA range: start 0x0 length 0x4000
00:19:23.155   NVMe0n1                   :      10.01    6184.80      24.16       0.00       0.00   20658.41    1653.29 3050402.91
00:19:23.155 ===================================================================================================================
00:19:23.155 Total                       :               6184.80      24.16       0.00       0.00   20658.41    1653.29 3050402.91
00:19:23.155 0
00:19:23.155  19:57:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82466
00:19:23.155  19:57:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:23.155  19:57:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:23.155 Running I/O for 10 seconds...
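The block above is the recovery half of the timeout test: host/timeout.sh@91 re-adds the TCP listener, the target reports it is listening on 10.0.0.2 port 4420 again, the pending controller reset finally completes ("Resetting controller successful."), and bdevperf prints the statistics for the first verify run. The nvmf_subsystem_remove_listener call that follows starts the next outage for the second run. Reduced to the RPCs that appear in this trace, the listener toggle amounts to the sketch below (commands copied from the log; the sleep is only a stand-in for the test's own waiting):

# drop the listener so in-flight I/O is aborted and host reconnect attempts are refused
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
# restore the listener; the host-side reconnect/reset path can then succeed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420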
00:19:23.155 19:57:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.415 [2024-07-15 19:57:17.405844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 
19:57:17.406424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.415 [2024-07-15 19:57:17.406435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.415 [2024-07-15 19:57:17.406445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.406980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.406991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:23.416 [2024-07-15 19:57:17.407096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.416 [2024-07-15 19:57:17.407737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62472 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.416 [2024-07-15 19:57:17.407747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 
[2024-07-15 19:57:17.407963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.407984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.407996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408386] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.417 [2024-07-15 19:57:17.408623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:23.417 [2024-07-15 19:57:17.408827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.417 [2024-07-15 19:57:17.408956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.408967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102a1f0 is same with the state(5) to be set 00:19:23.417 [2024-07-15 19:57:17.408981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.417 [2024-07-15 19:57:17.408989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.417 [2024-07-15 19:57:17.408998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61904 len:8 PRP1 0x0 PRP2 0x0 00:19:23.417 [2024-07-15 19:57:17.409018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.409083] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x102a1f0 was disconnected and freed. reset controller. 
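Note on the block above (annotation, not captured output): the long run of identical completions is every WRITE and READ still queued on I/O qpair 1 being failed back when the qpair is torn down for the reset. spdk_nvme_print_completion prints the status pair as (SCT/SC), and (00/08) is generic status code 0x08, Command Aborted due to SQ Deletion, which matches the "ABORTED - SQ DELETION" text. A minimal, illustrative shell helper (not part of the test scripts) that spells the pair out:

  # Illustrative only: map the (SCT/SC) pair printed by spdk_nvme_print_completion
  # to a readable name; 00/08 is the generic "aborted due to SQ deletion" status.
  decode_nvme_status() {
      case "$1/$2" in
          00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
          00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
          *)     echo "sct=$1 sc=$2 (see the NVMe base spec status code tables)" ;;
      esac
  }
  decode_nvme_status 00 08   # -> GENERIC - COMMAND ABORTED DUE TO SQ DELETION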
00:19:23.417 [2024-07-15 19:57:17.409164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.417 [2024-07-15 19:57:17.409186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.417 [2024-07-15 19:57:17.409198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.418 [2024-07-15 19:57:17.409208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.418 [2024-07-15 19:57:17.409218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.418 [2024-07-15 19:57:17.409227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.418 [2024-07-15 19:57:17.409237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.418 [2024-07-15 19:57:17.409247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.418 [2024-07-15 19:57:17.409256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:23.418 [2024-07-15 19:57:17.409498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.418 [2024-07-15 19:57:17.409526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:23.418 [2024-07-15 19:57:17.409634] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.418 [2024-07-15 19:57:17.409655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420 00:19:23.418 [2024-07-15 19:57:17.409666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:23.418 [2024-07-15 19:57:17.409684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:23.418 [2024-07-15 19:57:17.409701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.418 [2024-07-15 19:57:17.409710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:23.418 [2024-07-15 19:57:17.409721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:23.418 [2024-07-15 19:57:17.409741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
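Annotation: each reconnect attempt in this stretch dies inside uring_sock_create with errno = 111, which on Linux is plain ECONNREFUSED, presumably because an earlier step of timeout.sh removed the subsystem's listener (it is added back a few lines below at host/timeout.sh@102). bdev_nvme therefore marks the reset failed and re-arms its reconnect timer. Purely for illustration, the raw errno can be checked with:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused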
00:19:23.418 [2024-07-15 19:57:17.409752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.418 19:57:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:24.353 [2024-07-15 19:57:18.409912] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.353 [2024-07-15 19:57:18.410208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420 00:19:24.353 [2024-07-15 19:57:18.410479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:24.353 [2024-07-15 19:57:18.410644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:24.353 [2024-07-15 19:57:18.410870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.353 [2024-07-15 19:57:18.410927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:24.353 [2024-07-15 19:57:18.411064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:24.353 [2024-07-15 19:57:18.411123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:24.353 [2024-07-15 19:57:18.411228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.287 [2024-07-15 19:57:19.411468] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.287 [2024-07-15 19:57:19.411764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420 00:19:25.287 [2024-07-15 19:57:19.411909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:25.287 [2024-07-15 19:57:19.412073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:25.287 [2024-07-15 19:57:19.412205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.287 [2024-07-15 19:57:19.412293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.287 [2024-07-15 19:57:19.412387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.287 [2024-07-15 19:57:19.412418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:25.287 [2024-07-15 19:57:19.412430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.222 [2024-07-15 19:57:20.413758] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.222 [2024-07-15 19:57:20.413828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfacee0 with addr=10.0.0.2, port=4420 00:19:26.222 [2024-07-15 19:57:20.413845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacee0 is same with the state(5) to be set 00:19:26.222 [2024-07-15 19:57:20.414067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfacee0 (9): Bad file descriptor 00:19:26.222 [2024-07-15 19:57:20.414319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.222 [2024-07-15 19:57:20.414334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.222 [2024-07-15 19:57:20.414345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.222 [2024-07-15 19:57:20.418083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:26.222 [2024-07-15 19:57:20.418113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.222 19:57:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.480 [2024-07-15 19:57:20.651414] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.480 19:57:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82466 00:19:27.415 [2024-07-15 19:57:21.449774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
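Annotation: the resets only start succeeding once host/timeout.sh@102 re-adds the TCP listener and the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420" again; the next periodic reconnect then completes and bdev_nvme reports "Resetting controller successful." The listener bounce the test drives looks roughly like the sketch below, built only from RPCs that appear elsewhere in this log (the exact ordering inside timeout.sh may differ):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener: the host's periodic reconnects now fail with connect() errno 111.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Restore it: the next reconnect attempt succeeds and the pending reset completes.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420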
00:19:32.680 00:19:32.680 Latency(us) 00:19:32.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.680 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.680 Verification LBA range: start 0x0 length 0x4000 00:19:32.680 NVMe0n1 : 10.01 5358.00 20.93 4013.01 0.00 13631.14 625.57 3019898.88 00:19:32.680 =================================================================================================================== 00:19:32.680 Total : 5358.00 20.93 4013.01 0.00 13631.14 0.00 3019898.88 00:19:32.680 0 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82337 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82337 ']' 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82337 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82337 00:19:32.680 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:32.680 killing process with pid 82337 00:19:32.680 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.680 00:19:32.680 Latency(us) 00:19:32.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.681 =================================================================================================================== 00:19:32.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82337' 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82337 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82337 00:19:32.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82580 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82580 /var/tmp/bdevperf.sock 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82580 ']' 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.681 19:57:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.681 [2024-07-15 19:57:26.608344] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:19:32.681 [2024-07-15 19:57:26.608676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82580 ] 00:19:32.681 [2024-07-15 19:57:26.744904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.681 [2024-07-15 19:57:26.864738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.681 [2024-07-15 19:57:26.918456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:33.611 19:57:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.611 19:57:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:33.611 19:57:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82580 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:33.611 19:57:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82596 00:19:33.611 19:57:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:33.868 19:57:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:34.124 NVMe0n1 00:19:34.124 19:57:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82632 00:19:34.124 19:57:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.124 19:57:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:34.124 Running I/O for 10 seconds... 
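Annotation: the commands just above set up the next test case: a fresh bdevperf (pid 82580) is started with -z and an RPC socket so the harness can configure it before the run, a bpftrace probe (nvmf_timeout.bt) is attached to it, bdev_nvme options are set, NVMe0 is attached with a 5 s controller-loss timeout and 2 s reconnect delay, and perform_tests launches the 10-second randread workload. Gathered from the log into one sequence (paths and flags copied verbatim; the backgrounding and the wait for the socket are assumptions about how the harness sequences it):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Start bdevperf idle (-z) and let it listen on its RPC socket before configuring it.
  $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  $RPC bdev_nvme_set_options -r -1 -e 9
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The nvmf_subsystem_remove_listener call that follows immediately below is what pulls the connection out from under this run, which is why the output again fills with ABORTED - SQ DELETION completions.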
00:19:35.055 19:57:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.313 [2024-07-15 19:57:29.368146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.313 [2024-07-15 19:57:29.368444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 
19:57:29.368673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.368982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.368992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.313 [2024-07-15 19:57:29.369541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:35.313 [2024-07-15 19:57:29.369561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.313 [2024-07-15 19:57:29.369570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.369983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.369992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.314 [2024-07-15 19:57:29.370602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.314 [2024-07-15 19:57:29.370908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.314 [2024-07-15 19:57:29.370917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.315 [2024-07-15 19:57:29.370928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.315 [2024-07-15 19:57:29.370937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.315 [2024-07-15 19:57:29.370951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9640 is same with the state(5) to be set 00:19:35.315 [2024-07-15 19:57:29.370964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.315 [2024-07-15 19:57:29.370971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.315 [2024-07-15 19:57:29.370979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87064 len:8 PRP1 0x0 PRP2 0x0 00:19:35.315 [2024-07-15 19:57:29.370995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.315 [2024-07-15 19:57:29.371046] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ae9640 was disconnected and freed. reset controller. 
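The block above is the tail of the abort storm that accompanies the forced controller reset in this timeout test: when the I/O submission queue is deleted, every queued READ is completed manually with NVMe status "ABORTED - SQ DELETION" (status code type 0x0, status code 0x08), one completion per outstanding command, which is why the same message repeats with descending cid values until qpair 0x1ae9640 is disconnected and freed. A quick way to confirm that the number of aborts roughly matches the configured queue depth (128 for this job) is to count the completions in a saved copy of this console output. This snippet is illustrative only and not part of the SPDK test scripts; the log file name is an assumption.

# Hypothetical post-mortem check against a saved copy of this console output.
log=nvmf_timeout_console.log                              # assumed file name
# Count SQ-deletion aborts on I/O qpair 1; expect roughly the queue depth (128).
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' "$log"
# Show the last few aborted READs (cid and LBA) for reference.
grep -oE 'READ sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' "$log" | tail -n 5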
00:19:35.315 [2024-07-15 19:57:29.371325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.315 [2024-07-15 19:57:29.371410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98da0 (9): Bad file descriptor 00:19:35.315 [2024-07-15 19:57:29.371515] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.315 [2024-07-15 19:57:29.371536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a98da0 with addr=10.0.0.2, port=4420 00:19:35.315 [2024-07-15 19:57:29.371547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98da0 is same with the state(5) to be set 00:19:35.315 [2024-07-15 19:57:29.371565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98da0 (9): Bad file descriptor 00:19:35.315 [2024-07-15 19:57:29.371581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.315 [2024-07-15 19:57:29.371591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.315 [2024-07-15 19:57:29.371602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.315 [2024-07-15 19:57:29.371623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.315 [2024-07-15 19:57:29.371639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.315 19:57:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82632 00:19:37.219 [2024-07-15 19:57:31.371879] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.219 [2024-07-15 19:57:31.371949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a98da0 with addr=10.0.0.2, port=4420 00:19:37.219 [2024-07-15 19:57:31.371967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98da0 is same with the state(5) to be set 00:19:37.219 [2024-07-15 19:57:31.371994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98da0 (9): Bad file descriptor 00:19:37.219 [2024-07-15 19:57:31.372014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.219 [2024-07-15 19:57:31.372025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:37.219 [2024-07-15 19:57:31.372036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.219 [2024-07-15 19:57:31.372063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
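From this point the host sits in the reconnect loop the timeout test is designed to exercise: each attempt to re-establish the TCP connection to 10.0.0.2 port 4420 fails in uring_sock_create with errno 111 (connection refused), controller reinitialization is reported as failed, and bdev_nvme schedules the next reset roughly two seconds later (attempts at 19:57:29, :31 and :33 below). The pass criterion, visible further down where trace.txt is grepped, is simply that more than two delayed reconnects were recorded. The following is a minimal sketch of that assertion, using the trace path and message text printed by the test; the surrounding control flow is an assumption, not the script's exact wording.

# Sketch of the reconnect-delay check seen later in this log (assumed structure).
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
if (( delays <= 2 )); then
    # Fewer than three delayed reconnects means the retry timing was not exercised.
    echo "only $delays delayed reconnects recorded" >&2
    exit 1
fi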
00:19:37.219 [2024-07-15 19:57:31.372075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.743 [2024-07-15 19:57:33.372308] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.743 [2024-07-15 19:57:33.372374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a98da0 with addr=10.0.0.2, port=4420 00:19:39.743 [2024-07-15 19:57:33.372392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98da0 is same with the state(5) to be set 00:19:39.743 [2024-07-15 19:57:33.372418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98da0 (9): Bad file descriptor 00:19:39.743 [2024-07-15 19:57:33.372437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.743 [2024-07-15 19:57:33.372448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.743 [2024-07-15 19:57:33.372459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.743 [2024-07-15 19:57:33.372488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:39.743 [2024-07-15 19:57:33.372500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.640 [2024-07-15 19:57:35.372649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.640 [2024-07-15 19:57:35.372706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.640 [2024-07-15 19:57:35.372718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.640 [2024-07-15 19:57:35.372729] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:41.640 [2024-07-15 19:57:35.372764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.236 00:19:42.236 Latency(us) 00:19:42.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.236 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:42.236 NVMe0n1 : 8.07 1936.32 7.56 15.87 0.00 65514.89 7983.48 7015926.69 00:19:42.236 =================================================================================================================== 00:19:42.236 Total : 1936.32 7.56 15.87 0.00 65514.89 7983.48 7015926.69 00:19:42.236 0 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.236 Attaching 5 probes... 
00:19:42.236 1223.284461: reset bdev controller NVMe0 00:19:42.236 1223.438411: reconnect bdev controller NVMe0 00:19:42.236 3223.716169: reconnect delay bdev controller NVMe0 00:19:42.236 3223.739263: reconnect bdev controller NVMe0 00:19:42.236 5224.163571: reconnect delay bdev controller NVMe0 00:19:42.236 5224.184622: reconnect bdev controller NVMe0 00:19:42.236 7224.592818: reconnect delay bdev controller NVMe0 00:19:42.236 7224.612707: reconnect bdev controller NVMe0 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82596 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82580 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82580 ']' 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82580 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82580 00:19:42.236 killing process with pid 82580 00:19:42.236 Received shutdown signal, test time was about 8.129493 seconds 00:19:42.236 00:19:42.236 Latency(us) 00:19:42.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.236 =================================================================================================================== 00:19:42.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.236 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:42.237 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:42.237 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82580' 00:19:42.237 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82580 00:19:42.237 19:57:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82580 00:19:42.493 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.750 19:57:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.750 rmmod nvme_tcp 00:19:42.750 rmmod nvme_fabrics 00:19:43.006 rmmod nvme_keyring 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82148 ']' 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82148 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82148 ']' 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82148 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82148 00:19:43.006 killing process with pid 82148 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82148' 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82148 00:19:43.006 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82148 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:43.263 ************************************ 00:19:43.263 END TEST nvmf_timeout 00:19:43.263 ************************************ 00:19:43.263 00:19:43.263 real 0m46.980s 00:19:43.263 user 2m17.939s 00:19:43.263 sys 0m5.715s 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.263 19:57:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:43.263 19:57:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:43.263 19:57:37 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:43.263 19:57:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:43.263 19:57:37 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.263 19:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.263 19:57:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:43.263 ************************************ 00:19:43.263 END TEST nvmf_tcp 00:19:43.263 ************************************ 00:19:43.263 00:19:43.263 real 12m11.705s 00:19:43.263 user 29m41.972s 00:19:43.263 sys 3m3.181s 00:19:43.263 19:57:37 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.263 19:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.263 19:57:37 -- common/autotest_common.sh@1142 -- 
# return 0 00:19:43.263 19:57:37 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:43.263 19:57:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:43.263 19:57:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:43.263 19:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.263 19:57:37 -- common/autotest_common.sh@10 -- # set +x 00:19:43.263 ************************************ 00:19:43.263 START TEST nvmf_dif 00:19:43.263 ************************************ 00:19:43.263 19:57:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:43.543 * Looking for test storage... 00:19:43.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:43.543 19:57:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.543 19:57:37 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.543 19:57:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.543 19:57:37 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.543 19:57:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.543 19:57:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.544 19:57:37 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.544 19:57:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.544 19:57:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:43.544 19:57:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.544 19:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:43.544 19:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:43.544 19:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:43.544 19:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:43.544 19:57:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.544 19:57:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:43.544 19:57:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:43.544 19:57:37 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:43.544 Cannot find device "nvmf_tgt_br" 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.544 Cannot find device "nvmf_tgt_br2" 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:43.544 Cannot find device "nvmf_tgt_br" 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:43.544 Cannot find device "nvmf_tgt_br2" 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:43.544 19:57:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:43.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:43.802 00:19:43.802 --- 10.0.0.2 ping statistics --- 00:19:43.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.802 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:43.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:43.802 00:19:43.802 --- 10.0.0.3 ping statistics --- 00:19:43.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.802 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:43.802 00:19:43.802 --- 10.0.0.1 ping statistics --- 00:19:43.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.802 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:43.802 19:57:37 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:44.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.060 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:44.060 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.060 19:57:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:44.060 19:57:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83073 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83073 00:19:44.060 19:57:38 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83073 ']' 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.060 19:57:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:44.318 [2024-07-15 19:57:38.326098] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:44.318 [2024-07-15 19:57:38.326389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.318 [2024-07-15 19:57:38.470234] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.575 [2024-07-15 19:57:38.597934] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:44.575 [2024-07-15 19:57:38.598348] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.575 [2024-07-15 19:57:38.598519] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.575 [2024-07-15 19:57:38.598666] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.575 [2024-07-15 19:57:38.598906] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.575 [2024-07-15 19:57:38.598958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.575 [2024-07-15 19:57:38.655955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:45.142 19:57:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 19:57:39 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.142 19:57:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:45.142 19:57:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 [2024-07-15 19:57:39.317869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.142 19:57:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 ************************************ 00:19:45.142 START TEST fio_dif_1_default 00:19:45.142 ************************************ 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 bdev_null0 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:45.142 
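
Everything the test does against the target goes through SPDK's JSON-RPC socket: the harness started nvmf_tgt inside the namespace (see the EAL/app notices above and the wait on /var/tmp/spdk.sock), and rpc_cmd forwards to scripts/rpc.py. Condensed into one place, the fio_dif_1_default plumbing traced here and just below corresponds roughly to the following; the commands and arguments are taken from the trace.

# Target started in the namespace, as traced above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# TCP transport with DIF insert/strip, then a 64 MB null bdev with 512-byte blocks,
# 16 bytes of metadata and DIF type 1, exported through one subsystem and listener:
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
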
19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.142 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 [2024-07-15 19:57:39.369994] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.143 { 00:19:45.143 "params": { 00:19:45.143 "name": "Nvme$subsystem", 00:19:45.143 "trtype": "$TEST_TRANSPORT", 00:19:45.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.143 "adrfam": "ipv4", 00:19:45.143 "trsvcid": "$NVMF_PORT", 00:19:45.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.143 "hdgst": ${hdgst:-false}, 00:19:45.143 "ddgst": ${ddgst:-false} 00:19:45.143 }, 00:19:45.143 "method": "bdev_nvme_attach_controller" 00:19:45.143 } 00:19:45.143 EOF 00:19:45.143 )") 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:45.143 19:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:45.143 "params": { 00:19:45.143 "name": "Nvme0", 00:19:45.143 "trtype": "tcp", 00:19:45.143 "traddr": "10.0.0.2", 00:19:45.143 "adrfam": "ipv4", 00:19:45.143 "trsvcid": "4420", 00:19:45.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:45.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:45.143 "hdgst": false, 00:19:45.143 "ddgst": false 00:19:45.143 }, 00:19:45.143 "method": "bdev_nvme_attach_controller" 00:19:45.143 }' 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:45.401 19:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.401 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:45.401 fio-3.35 00:19:45.401 Starting 1 thread 00:19:57.657 00:19:57.657 filename0: (groupid=0, jobs=1): err= 0: pid=83141: Mon Jul 15 19:57:50 2024 00:19:57.657 read: IOPS=8753, BW=34.2MiB/s (35.9MB/s)(342MiB/10001msec) 00:19:57.657 slat (usec): min=5, max=127, avg= 8.68, stdev= 3.37 00:19:57.657 clat (usec): min=310, max=1313, avg=431.59, stdev=34.04 00:19:57.657 lat (usec): min=316, max=1323, avg=440.27, stdev=34.78 00:19:57.657 clat percentiles (usec): 00:19:57.657 | 1.00th=[ 347], 5.00th=[ 375], 
10.00th=[ 392], 20.00th=[ 408], 00:19:57.657 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 437], 00:19:57.657 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 474], 95.00th=[ 490], 00:19:57.657 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 562], 99.95th=[ 570], 00:19:57.657 | 99.99th=[ 644] 00:19:57.657 bw ( KiB/s): min=33760, max=39232, per=100.00%, avg=35073.68, stdev=1100.92, samples=19 00:19:57.657 iops : min= 8440, max= 9808, avg=8768.42, stdev=275.23, samples=19 00:19:57.657 lat (usec) : 500=96.98%, 750=3.02% 00:19:57.657 lat (msec) : 2=0.01% 00:19:57.657 cpu : usr=84.55%, sys=13.59%, ctx=81, majf=0, minf=0 00:19:57.657 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.657 issued rwts: total=87540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.657 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:57.657 00:19:57.657 Run status group 0 (all jobs): 00:19:57.657 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=342MiB (359MB), run=10001-10001msec 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.657 ************************************ 00:19:57.657 END TEST fio_dif_1_default 00:19:57.657 ************************************ 00:19:57.657 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.657 00:19:57.657 real 0m11.024s 00:19:57.657 user 0m9.087s 00:19:57.658 sys 0m1.643s 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:57.658 19:57:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:57.658 19:57:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:57.658 19:57:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 ************************************ 00:19:57.658 START TEST fio_dif_1_multi_subsystems 00:19:57.658 ************************************ 00:19:57.658 19:57:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 bdev_null0 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 [2024-07-15 19:57:50.444119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 bdev_null1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.658 { 00:19:57.658 "params": { 00:19:57.658 "name": "Nvme$subsystem", 00:19:57.658 "trtype": "$TEST_TRANSPORT", 00:19:57.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.658 "adrfam": "ipv4", 00:19:57.658 "trsvcid": "$NVMF_PORT", 00:19:57.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.658 "hdgst": ${hdgst:-false}, 00:19:57.658 "ddgst": ${ddgst:-false} 00:19:57.658 }, 00:19:57.658 "method": "bdev_nvme_attach_controller" 00:19:57.658 } 00:19:57.658 EOF 00:19:57.658 )") 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:57.658 19:57:50 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.658 { 00:19:57.658 "params": { 00:19:57.658 "name": "Nvme$subsystem", 00:19:57.658 "trtype": "$TEST_TRANSPORT", 00:19:57.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.658 "adrfam": "ipv4", 00:19:57.658 "trsvcid": "$NVMF_PORT", 00:19:57.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.658 "hdgst": ${hdgst:-false}, 00:19:57.658 "ddgst": ${ddgst:-false} 00:19:57.658 }, 00:19:57.658 "method": "bdev_nvme_attach_controller" 00:19:57.658 } 00:19:57.658 EOF 00:19:57.658 )") 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
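
For this two-subsystem run the harness feeds fio two things through anonymous file descriptors: a bdev JSON configuration whose bdev_nvme_attach_controller entries are printed just below (Nvme0 -> cnode0, Nvme1 -> cnode1), and a generated job file, with SPDK's fio bdev plugin preloaded. Done by hand with ordinary files it would look roughly like the sketch below. The job-file body is inferred from the fio banner and results that follow (randread, 4 KiB blocks, iodepth 4, a roughly 10-second run), and the Nvme0n1/Nvme1n1 filenames assume SPDK's usual bdev_nvme naming; neither appears verbatim in this excerpt.

# bdev.json would hold the attach-controller JSON printed below.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio
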
00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.658 "params": { 00:19:57.658 "name": "Nvme0", 00:19:57.658 "trtype": "tcp", 00:19:57.658 "traddr": "10.0.0.2", 00:19:57.658 "adrfam": "ipv4", 00:19:57.658 "trsvcid": "4420", 00:19:57.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:57.658 "hdgst": false, 00:19:57.658 "ddgst": false 00:19:57.658 }, 00:19:57.658 "method": "bdev_nvme_attach_controller" 00:19:57.658 },{ 00:19:57.658 "params": { 00:19:57.658 "name": "Nvme1", 00:19:57.658 "trtype": "tcp", 00:19:57.658 "traddr": "10.0.0.2", 00:19:57.658 "adrfam": "ipv4", 00:19:57.658 "trsvcid": "4420", 00:19:57.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.658 "hdgst": false, 00:19:57.658 "ddgst": false 00:19:57.658 }, 00:19:57.658 "method": "bdev_nvme_attach_controller" 00:19:57.658 }' 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.658 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:57.659 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:57.659 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:57.659 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:57.659 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:57.659 19:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.659 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:57.659 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:57.659 fio-3.35 00:19:57.659 Starting 2 threads 00:20:07.645 00:20:07.645 filename0: (groupid=0, jobs=1): err= 0: pid=83300: Mon Jul 15 19:58:01 2024 00:20:07.645 read: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(187MiB/10001msec) 00:20:07.645 slat (nsec): min=6545, max=66518, avg=13572.99, stdev=5123.29 00:20:07.645 clat (usec): min=388, max=1936, avg=796.94, stdev=66.85 00:20:07.645 lat (usec): min=395, max=1969, avg=810.51, stdev=68.03 00:20:07.645 clat percentiles (usec): 00:20:07.645 | 1.00th=[ 652], 5.00th=[ 685], 10.00th=[ 709], 20.00th=[ 742], 00:20:07.645 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:20:07.645 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 906], 00:20:07.645 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 1012], 00:20:07.645 | 99.99th=[ 1090] 00:20:07.645 bw ( KiB/s): min=18304, max=20640, per=50.10%, avg=19233.68, stdev=739.28, samples=19 00:20:07.645 iops : min= 4576, max= 
5160, avg=4808.42, stdev=184.82, samples=19 00:20:07.645 lat (usec) : 500=0.02%, 750=24.92%, 1000=74.97% 00:20:07.645 lat (msec) : 2=0.09% 00:20:07.645 cpu : usr=89.97%, sys=8.69%, ctx=14, majf=0, minf=9 00:20:07.645 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.645 issued rwts: total=47996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.645 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:07.645 filename1: (groupid=0, jobs=1): err= 0: pid=83301: Mon Jul 15 19:58:01 2024 00:20:07.645 read: IOPS=4797, BW=18.7MiB/s (19.7MB/s)(187MiB/10001msec) 00:20:07.645 slat (nsec): min=6358, max=93297, avg=13791.67, stdev=5257.08 00:20:07.645 clat (usec): min=610, max=1938, avg=795.54, stdev=62.12 00:20:07.645 lat (usec): min=621, max=1964, avg=809.33, stdev=62.85 00:20:07.645 clat percentiles (usec): 00:20:07.645 | 1.00th=[ 660], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 742], 00:20:07.645 | 30.00th=[ 766], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:20:07.645 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:20:07.645 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 1012], 00:20:07.645 | 99.99th=[ 1893] 00:20:07.645 bw ( KiB/s): min=18304, max=20608, per=50.09%, avg=19228.63, stdev=728.69, samples=19 00:20:07.645 iops : min= 4576, max= 5152, avg=4807.16, stdev=182.17, samples=19 00:20:07.645 lat (usec) : 750=23.01%, 1000=76.92% 00:20:07.645 lat (msec) : 2=0.07% 00:20:07.645 cpu : usr=89.75%, sys=8.87%, ctx=47, majf=0, minf=0 00:20:07.645 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.645 issued rwts: total=47984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.645 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:07.645 00:20:07.645 Run status group 0 (all jobs): 00:20:07.645 READ: bw=37.5MiB/s (39.3MB/s), 18.7MiB/s-18.7MiB/s (19.7MB/s-19.7MB/s), io=375MiB (393MB), run=10001-10001msec 00:20:07.645 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:07.645 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:07.645 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:07.645 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 ************************************ 00:20:07.646 END TEST fio_dif_1_multi_subsystems 00:20:07.646 ************************************ 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 00:20:07.646 real 0m11.195s 00:20:07.646 user 0m18.760s 00:20:07.646 sys 0m2.080s 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:07.646 19:58:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:07.646 19:58:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:07.646 19:58:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 ************************************ 00:20:07.646 START TEST fio_dif_rand_params 00:20:07.646 ************************************ 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.646 19:58:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 bdev_null0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.646 [2024-07-15 19:58:01.690417] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.646 { 00:20:07.646 "params": { 00:20:07.646 "name": "Nvme$subsystem", 00:20:07.646 "trtype": "$TEST_TRANSPORT", 00:20:07.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.646 "adrfam": "ipv4", 00:20:07.646 "trsvcid": "$NVMF_PORT", 00:20:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.646 "hdgst": ${hdgst:-false}, 00:20:07.646 "ddgst": ${ddgst:-false} 00:20:07.646 }, 00:20:07.646 "method": "bdev_nvme_attach_controller" 00:20:07.646 } 00:20:07.646 EOF 00:20:07.646 )") 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.646 
19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
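
fio_dif_rand_params repeats the same create_subsystem plumbing with different DIF and workload parameters in each phase. For this first phase the null bdev is created with DIF type 3, and the workload (from the NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 settings above) is three 128 KiB randread jobs at queue depth 3 for 5 seconds. In rpc.py terms, again assuming rpc_cmd forwards to scripts/rpc.py, the subsystem setup traced above reduces to:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio then runs three jobs with bs=128k, iodepth=3 and runtime=5 against the attached bdev
# (the generated job-file details are inferred from these parameters and the banner, not shown verbatim here).
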
00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:07.646 "params": { 00:20:07.646 "name": "Nvme0", 00:20:07.646 "trtype": "tcp", 00:20:07.646 "traddr": "10.0.0.2", 00:20:07.646 "adrfam": "ipv4", 00:20:07.646 "trsvcid": "4420", 00:20:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.646 "hdgst": false, 00:20:07.646 "ddgst": false 00:20:07.646 }, 00:20:07.646 "method": "bdev_nvme_attach_controller" 00:20:07.646 }' 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.646 19:58:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.912 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:07.912 ... 
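
As a quick consistency check on the three job summaries that follow: at a 128 KiB block size, 250 IOPS per job is 250 × 128 KiB = 32,000 KiB/s ≈ 31.3 MiB/s (32.8 MB/s), matching each job's reported bandwidth, and three such jobs give the 93.9 MiB/s aggregate shown in the run-status line.
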
00:20:07.912 fio-3.35 00:20:07.912 Starting 3 threads 00:20:14.472 00:20:14.472 filename0: (groupid=0, jobs=1): err= 0: pid=83457: Mon Jul 15 19:58:07 2024 00:20:14.472 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5009msec) 00:20:14.472 slat (nsec): min=6953, max=71595, avg=12104.37, stdev=6393.06 00:20:14.472 clat (usec): min=11156, max=12702, avg=11950.32, stdev=312.19 00:20:14.472 lat (usec): min=11166, max=12717, avg=11962.43, stdev=311.71 00:20:14.472 clat percentiles (usec): 00:20:14.472 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:20:14.472 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:20:14.472 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:14.472 | 99.00th=[12649], 99.50th=[12649], 99.90th=[12649], 99.95th=[12649], 00:20:14.472 | 99.99th=[12649] 00:20:14.472 bw ( KiB/s): min=31488, max=33024, per=33.31%, avg=32019.10, stdev=629.97, samples=10 00:20:14.472 iops : min= 246, max= 258, avg=250.10, stdev= 4.91, samples=10 00:20:14.472 lat (msec) : 20=100.00% 00:20:14.472 cpu : usr=91.29%, sys=8.01%, ctx=47, majf=0, minf=9 00:20:14.472 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:14.472 filename0: (groupid=0, jobs=1): err= 0: pid=83458: Mon Jul 15 19:58:07 2024 00:20:14.472 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5007msec) 00:20:14.472 slat (nsec): min=6793, max=41199, avg=10907.64, stdev=4740.45 00:20:14.472 clat (usec): min=8406, max=14171, avg=11950.88, stdev=374.88 00:20:14.472 lat (usec): min=8415, max=14198, avg=11961.78, stdev=375.23 00:20:14.472 clat percentiles (usec): 00:20:14.472 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:20:14.472 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:20:14.472 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:14.472 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14222], 99.95th=[14222], 00:20:14.472 | 99.99th=[14222] 00:20:14.472 bw ( KiB/s): min=31488, max=33792, per=33.31%, avg=32025.60, stdev=728.59, samples=10 00:20:14.472 iops : min= 246, max= 264, avg=250.20, stdev= 5.69, samples=10 00:20:14.472 lat (msec) : 10=0.24%, 20=99.76% 00:20:14.472 cpu : usr=91.47%, sys=8.01%, ctx=6, majf=0, minf=9 00:20:14.472 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:14.472 filename0: (groupid=0, jobs=1): err= 0: pid=83459: Mon Jul 15 19:58:07 2024 00:20:14.472 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5008msec) 00:20:14.472 slat (nsec): min=6677, max=42971, avg=10216.00, stdev=4027.65 00:20:14.472 clat (usec): min=9943, max=12942, avg=11953.66, stdev=333.29 00:20:14.472 lat (usec): min=9951, max=12954, avg=11963.87, stdev=333.51 00:20:14.472 clat percentiles (usec): 00:20:14.472 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:20:14.472 | 30.00th=[11731], 40.00th=[11863], 
50.00th=[11994], 60.00th=[12125], 00:20:14.472 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12518], 00:20:14.472 | 99.00th=[12649], 99.50th=[12780], 99.90th=[12911], 99.95th=[12911], 00:20:14.472 | 99.99th=[12911] 00:20:14.472 bw ( KiB/s): min=31488, max=33024, per=33.31%, avg=32025.60, stdev=632.27, samples=10 00:20:14.472 iops : min= 246, max= 258, avg=250.20, stdev= 4.94, samples=10 00:20:14.472 lat (msec) : 10=0.24%, 20=99.76% 00:20:14.472 cpu : usr=91.39%, sys=8.09%, ctx=7, majf=0, minf=9 00:20:14.472 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.472 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:14.472 00:20:14.472 Run status group 0 (all jobs): 00:20:14.472 READ: bw=93.9MiB/s (98.4MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=470MiB (493MB), run=5007-5009msec 00:20:14.472 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:14.472 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:14.472 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:14.473 
19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 bdev_null0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 [2024-07-15 19:58:07.746871] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 bdev_null1 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 bdev_null2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:14.473 { 00:20:14.473 "params": { 00:20:14.473 "name": "Nvme$subsystem", 00:20:14.473 "trtype": "$TEST_TRANSPORT", 00:20:14.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.473 "adrfam": "ipv4", 00:20:14.473 "trsvcid": "$NVMF_PORT", 00:20:14.473 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.473 "hdgst": ${hdgst:-false}, 00:20:14.473 "ddgst": ${ddgst:-false} 00:20:14.473 }, 00:20:14.473 "method": "bdev_nvme_attach_controller" 00:20:14.473 } 00:20:14.473 EOF 00:20:14.473 )") 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:14.473 { 00:20:14.473 "params": { 00:20:14.473 "name": "Nvme$subsystem", 00:20:14.473 "trtype": "$TEST_TRANSPORT", 00:20:14.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.473 "adrfam": "ipv4", 00:20:14.473 "trsvcid": "$NVMF_PORT", 00:20:14.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.473 "hdgst": ${hdgst:-false}, 00:20:14.473 "ddgst": ${ddgst:-false} 00:20:14.473 }, 00:20:14.473 "method": "bdev_nvme_attach_controller" 00:20:14.473 } 00:20:14.473 EOF 00:20:14.473 )") 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:14.473 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:14.474 { 00:20:14.474 "params": { 00:20:14.474 "name": "Nvme$subsystem", 00:20:14.474 "trtype": "$TEST_TRANSPORT", 00:20:14.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.474 "adrfam": "ipv4", 00:20:14.474 "trsvcid": "$NVMF_PORT", 00:20:14.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.474 "hdgst": ${hdgst:-false}, 00:20:14.474 "ddgst": ${ddgst:-false} 00:20:14.474 }, 00:20:14.474 "method": "bdev_nvme_attach_controller" 00:20:14.474 } 00:20:14.474 EOF 00:20:14.474 )") 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:14.474 "params": { 00:20:14.474 "name": "Nvme0", 00:20:14.474 "trtype": "tcp", 00:20:14.474 "traddr": "10.0.0.2", 00:20:14.474 "adrfam": "ipv4", 00:20:14.474 "trsvcid": "4420", 00:20:14.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:14.474 "hdgst": false, 00:20:14.474 "ddgst": false 00:20:14.474 }, 00:20:14.474 "method": "bdev_nvme_attach_controller" 00:20:14.474 },{ 00:20:14.474 "params": { 00:20:14.474 "name": "Nvme1", 00:20:14.474 "trtype": "tcp", 00:20:14.474 "traddr": "10.0.0.2", 00:20:14.474 "adrfam": "ipv4", 00:20:14.474 "trsvcid": "4420", 00:20:14.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.474 "hdgst": false, 00:20:14.474 "ddgst": false 00:20:14.474 }, 00:20:14.474 "method": "bdev_nvme_attach_controller" 00:20:14.474 },{ 00:20:14.474 "params": { 00:20:14.474 "name": "Nvme2", 00:20:14.474 "trtype": "tcp", 00:20:14.474 "traddr": "10.0.0.2", 00:20:14.474 "adrfam": "ipv4", 00:20:14.474 "trsvcid": "4420", 00:20:14.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:14.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:14.474 "hdgst": false, 00:20:14.474 "ddgst": false 00:20:14.474 }, 00:20:14.474 "method": "bdev_nvme_attach_controller" 00:20:14.474 }' 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:14.474 19:58:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.474 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:14.474 ... 00:20:14.474 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:14.474 ... 00:20:14.474 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:14.474 ... 00:20:14.474 fio-3.35 00:20:14.474 Starting 24 threads 00:20:26.744 00:20:26.744 filename0: (groupid=0, jobs=1): err= 0: pid=83554: Mon Jul 15 19:58:18 2024 00:20:26.744 read: IOPS=205, BW=821KiB/s (841kB/s)(8236KiB/10029msec) 00:20:26.744 slat (usec): min=3, max=8025, avg=31.15, stdev=352.40 00:20:26.744 clat (msec): min=33, max=148, avg=77.72, stdev=20.88 00:20:26.744 lat (msec): min=33, max=148, avg=77.75, stdev=20.87 00:20:26.744 clat percentiles (msec): 00:20:26.744 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:26.744 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:26.744 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 110], 00:20:26.744 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 148], 00:20:26.744 | 99.99th=[ 148] 00:20:26.744 bw ( KiB/s): min= 573, max= 1032, per=4.28%, avg=819.45, stdev=119.96, samples=20 00:20:26.744 iops : min= 143, max= 258, avg=204.85, stdev=30.02, samples=20 00:20:26.744 lat (msec) : 50=14.81%, 100=68.53%, 250=16.66% 00:20:26.744 cpu : usr=31.96%, sys=1.65%, ctx=921, majf=0, minf=9 00:20:26.744 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:26.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.744 filename0: (groupid=0, jobs=1): err= 0: pid=83555: Mon Jul 15 19:58:18 2024 00:20:26.744 read: IOPS=191, BW=764KiB/s (782kB/s)(7648KiB/10010msec) 00:20:26.744 slat (usec): min=4, max=8041, avg=28.33, stdev=294.53 00:20:26.744 clat (msec): min=20, max=213, avg=83.61, stdev=25.83 00:20:26.744 lat (msec): min=20, max=213, avg=83.64, stdev=25.84 00:20:26.744 clat percentiles (msec): 00:20:26.744 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:26.744 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 94], 00:20:26.744 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 121], 00:20:26.744 | 99.00th=[ 153], 99.50th=[ 190], 99.90th=[ 213], 99.95th=[ 213], 00:20:26.744 | 99.99th=[ 213] 00:20:26.744 bw ( KiB/s): min= 496, max= 992, per=3.97%, avg=760.21, stdev=176.81, samples=19 00:20:26.744 iops : min= 124, max= 248, avg=190.05, stdev=44.20, samples=19 00:20:26.744 lat (msec) : 50=13.18%, 100=59.83%, 250=26.99% 00:20:26.744 cpu : usr=31.70%, sys=1.48%, ctx=925, majf=0, minf=9 00:20:26.744 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:26.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 issued rwts: total=1912,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:26.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.744 filename0: (groupid=0, jobs=1): err= 0: pid=83556: Mon Jul 15 19:58:18 2024 00:20:26.744 read: IOPS=200, BW=803KiB/s (822kB/s)(8092KiB/10075msec) 00:20:26.744 slat (usec): min=3, max=6023, avg=20.36, stdev=160.95 00:20:26.744 clat (msec): min=4, max=151, avg=79.46, stdev=28.21 00:20:26.744 lat (msec): min=4, max=151, avg=79.48, stdev=28.21 00:20:26.744 clat percentiles (msec): 00:20:26.744 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 47], 20.00th=[ 58], 00:20:26.744 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 87], 00:20:26.744 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 120], 00:20:26.744 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:20:26.744 | 99.99th=[ 153] 00:20:26.744 bw ( KiB/s): min= 528, max= 1507, per=4.19%, avg=802.15, stdev=224.69, samples=20 00:20:26.744 iops : min= 132, max= 376, avg=200.50, stdev=56.05, samples=20 00:20:26.744 lat (msec) : 10=4.75%, 20=0.79%, 50=10.83%, 100=55.46%, 250=28.18% 00:20:26.744 cpu : usr=36.01%, sys=1.82%, ctx=1054, majf=0, minf=9 00:20:26.744 IO depths : 1=0.1%, 2=2.6%, 4=10.2%, 8=72.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:26.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 complete : 0=0.0%, 4=90.3%, 8=7.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.744 filename0: (groupid=0, jobs=1): err= 0: pid=83557: Mon Jul 15 19:58:18 2024 00:20:26.744 read: IOPS=189, BW=758KiB/s (776kB/s)(7608KiB/10041msec) 00:20:26.744 slat (usec): min=4, max=8054, avg=20.98, stdev=184.49 00:20:26.744 clat (msec): min=35, max=157, avg=84.24, stdev=21.97 00:20:26.744 lat (msec): min=35, max=157, avg=84.26, stdev=21.97 00:20:26.744 clat percentiles (msec): 00:20:26.744 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 64], 00:20:26.744 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 95], 00:20:26.744 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 117], 00:20:26.744 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:20:26.744 | 99.99th=[ 157] 00:20:26.744 bw ( KiB/s): min= 528, max= 976, per=3.96%, avg=757.20, stdev=141.51, samples=20 00:20:26.744 iops : min= 132, max= 244, avg=189.30, stdev=35.38, samples=20 00:20:26.744 lat (msec) : 50=8.89%, 100=64.83%, 250=26.29% 00:20:26.744 cpu : usr=34.69%, sys=1.63%, ctx=1048, majf=0, minf=9 00:20:26.744 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=73.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:26.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 complete : 0=0.0%, 4=90.0%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.744 filename0: (groupid=0, jobs=1): err= 0: pid=83558: Mon Jul 15 19:58:18 2024 00:20:26.744 read: IOPS=207, BW=831KiB/s (851kB/s)(8320KiB/10014msec) 00:20:26.744 slat (usec): min=4, max=8024, avg=26.53, stdev=230.45 00:20:26.744 clat (msec): min=25, max=170, avg=76.90, stdev=21.70 00:20:26.744 lat (msec): min=25, max=170, avg=76.92, stdev=21.70 00:20:26.744 clat percentiles (msec): 00:20:26.744 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:20:26.744 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:20:26.744 | 70.00th=[ 91], 80.00th=[ 96], 
90.00th=[ 108], 95.00th=[ 111], 00:20:26.744 | 99.00th=[ 123], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 171], 00:20:26.744 | 99.99th=[ 171] 00:20:26.744 bw ( KiB/s): min= 512, max= 1024, per=4.31%, avg=825.60, stdev=139.28, samples=20 00:20:26.744 iops : min= 128, max= 256, avg=206.40, stdev=34.82, samples=20 00:20:26.744 lat (msec) : 50=14.76%, 100=68.70%, 250=16.54% 00:20:26.744 cpu : usr=38.39%, sys=2.05%, ctx=962, majf=0, minf=9 00:20:26.744 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:26.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.744 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename0: (groupid=0, jobs=1): err= 0: pid=83559: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=196, BW=787KiB/s (806kB/s)(7896KiB/10037msec) 00:20:26.745 slat (usec): min=6, max=4063, avg=20.99, stdev=124.01 00:20:26.745 clat (msec): min=34, max=195, avg=81.14, stdev=24.44 00:20:26.745 lat (msec): min=34, max=195, avg=81.16, stdev=24.44 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:26.745 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:20:26.745 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 120], 00:20:26.745 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 197], 99.95th=[ 197], 00:20:26.745 | 99.99th=[ 197] 00:20:26.745 bw ( KiB/s): min= 512, max= 1000, per=4.10%, avg=785.70, stdev=175.54, samples=20 00:20:26.745 iops : min= 128, max= 250, avg=196.40, stdev=43.91, samples=20 00:20:26.745 lat (msec) : 50=15.55%, 100=59.88%, 250=24.57% 00:20:26.745 cpu : usr=41.89%, sys=1.90%, ctx=1341, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename0: (groupid=0, jobs=1): err= 0: pid=83560: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=201, BW=804KiB/s (824kB/s)(8100KiB/10072msec) 00:20:26.745 slat (usec): min=3, max=4070, avg=20.85, stdev=155.31 00:20:26.745 clat (msec): min=34, max=149, avg=79.33, stdev=22.11 00:20:26.745 lat (msec): min=34, max=149, avg=79.35, stdev=22.11 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 57], 00:20:26.745 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:20:26.745 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 115], 00:20:26.745 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:20:26.745 | 99.99th=[ 150] 00:20:26.745 bw ( KiB/s): min= 528, max= 1024, per=4.20%, avg=803.60, stdev=142.79, samples=20 00:20:26.745 iops : min= 132, max= 256, avg=200.90, stdev=35.70, samples=20 00:20:26.745 lat (msec) : 50=9.58%, 100=67.75%, 250=22.67% 00:20:26.745 cpu : usr=43.33%, sys=2.37%, ctx=1449, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:26.745 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename0: (groupid=0, jobs=1): err= 0: pid=83561: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=211, BW=848KiB/s (868kB/s)(8480KiB/10004msec) 00:20:26.745 slat (usec): min=8, max=8033, avg=29.33, stdev=275.40 00:20:26.745 clat (usec): min=1956, max=178886, avg=75368.76, stdev=23366.24 00:20:26.745 lat (usec): min=1964, max=178909, avg=75398.10, stdev=23369.79 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 28], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:26.745 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 80], 00:20:26.745 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 111], 00:20:26.745 | 99.00th=[ 121], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 180], 00:20:26.745 | 99.99th=[ 180] 00:20:26.745 bw ( KiB/s): min= 512, max= 1048, per=4.39%, avg=840.00, stdev=149.00, samples=19 00:20:26.745 iops : min= 128, max= 262, avg=210.00, stdev=37.25, samples=19 00:20:26.745 lat (msec) : 2=0.14%, 4=0.33%, 10=0.42%, 50=15.99%, 100=65.24% 00:20:26.745 lat (msec) : 250=17.88% 00:20:26.745 cpu : usr=41.51%, sys=2.29%, ctx=1258, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83562: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=194, BW=778KiB/s (797kB/s)(7784KiB/10005msec) 00:20:26.745 slat (usec): min=4, max=4029, avg=18.93, stdev=91.39 00:20:26.745 clat (msec): min=26, max=207, avg=82.15, stdev=25.56 00:20:26.745 lat (msec): min=26, max=207, avg=82.16, stdev=25.56 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:26.745 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 88], 00:20:26.745 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 121], 00:20:26.745 | 99.00th=[ 146], 99.50th=[ 188], 99.90th=[ 207], 99.95th=[ 207], 00:20:26.745 | 99.99th=[ 207] 00:20:26.745 bw ( KiB/s): min= 400, max= 1024, per=4.05%, avg=775.16, stdev=190.94, samples=19 00:20:26.745 iops : min= 100, max= 256, avg=193.79, stdev=47.73, samples=19 00:20:26.745 lat (msec) : 50=16.14%, 100=56.73%, 250=27.13% 00:20:26.745 cpu : usr=35.29%, sys=1.69%, ctx=983, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=73.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=89.6%, 8=8.2%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83563: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=192, BW=771KiB/s (790kB/s)(7736KiB/10031msec) 00:20:26.745 slat (usec): min=5, max=8048, avg=35.41, stdev=349.02 00:20:26.745 clat (msec): min=35, max=154, avg=82.73, stdev=21.31 00:20:26.745 lat (msec): min=35, max=154, avg=82.76, stdev=21.31 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 
62], 00:20:26.745 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 93], 00:20:26.745 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 115], 00:20:26.745 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 155], 99.95th=[ 155], 00:20:26.745 | 99.99th=[ 155] 00:20:26.745 bw ( KiB/s): min= 528, max= 1048, per=4.02%, avg=769.35, stdev=143.42, samples=20 00:20:26.745 iops : min= 132, max= 262, avg=192.30, stdev=35.90, samples=20 00:20:26.745 lat (msec) : 50=11.01%, 100=64.53%, 250=24.46% 00:20:26.745 cpu : usr=36.73%, sys=1.77%, ctx=1052, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83564: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=194, BW=777KiB/s (795kB/s)(7784KiB/10020msec) 00:20:26.745 slat (usec): min=4, max=4042, avg=21.60, stdev=129.03 00:20:26.745 clat (msec): min=27, max=176, avg=82.25, stdev=23.20 00:20:26.745 lat (msec): min=27, max=176, avg=82.27, stdev=23.20 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:26.745 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 90], 00:20:26.745 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 117], 00:20:26.745 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 178], 99.95th=[ 178], 00:20:26.745 | 99.99th=[ 178] 00:20:26.745 bw ( KiB/s): min= 512, max= 976, per=4.04%, avg=772.00, stdev=163.92, samples=20 00:20:26.745 iops : min= 128, max= 244, avg=193.00, stdev=40.98, samples=20 00:20:26.745 lat (msec) : 50=12.80%, 100=63.51%, 250=23.69% 00:20:26.745 cpu : usr=39.40%, sys=2.28%, ctx=1089, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=2.6%, 4=10.2%, 8=72.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=89.9%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83565: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=213, BW=854KiB/s (875kB/s)(8608KiB/10076msec) 00:20:26.745 slat (usec): min=4, max=8027, avg=21.47, stdev=244.26 00:20:26.745 clat (msec): min=2, max=144, avg=74.73, stdev=26.40 00:20:26.745 lat (msec): min=2, max=144, avg=74.75, stdev=26.40 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 48], 20.00th=[ 57], 00:20:26.745 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:20:26.745 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:20:26.745 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:20:26.745 | 99.99th=[ 144] 00:20:26.745 bw ( KiB/s): min= 632, max= 1664, per=4.46%, avg=854.30, stdev=221.99, samples=20 00:20:26.745 iops : min= 158, max= 416, avg=213.55, stdev=55.48, samples=20 00:20:26.745 lat (msec) : 4=1.63%, 10=4.32%, 50=9.94%, 100=66.59%, 250=17.52% 00:20:26.745 cpu : usr=36.18%, sys=1.85%, ctx=1005, majf=0, minf=9 00:20:26.745 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=80.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83566: Mon Jul 15 19:58:18 2024 00:20:26.745 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10012msec) 00:20:26.745 slat (usec): min=3, max=8117, avg=31.31, stdev=302.65 00:20:26.745 clat (msec): min=29, max=190, avg=81.56, stdev=24.83 00:20:26.745 lat (msec): min=29, max=190, avg=81.59, stdev=24.83 00:20:26.745 clat percentiles (msec): 00:20:26.745 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:20:26.745 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 89], 00:20:26.745 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 117], 00:20:26.745 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:20:26.745 | 99.99th=[ 190] 00:20:26.745 bw ( KiB/s): min= 496, max= 1024, per=4.08%, avg=780.42, stdev=184.62, samples=19 00:20:26.745 iops : min= 124, max= 256, avg=195.11, stdev=46.16, samples=19 00:20:26.745 lat (msec) : 50=14.95%, 100=56.79%, 250=28.27% 00:20:26.745 cpu : usr=36.44%, sys=1.95%, ctx=1055, majf=0, minf=9 00:20:26.745 IO depths : 1=0.1%, 2=2.2%, 4=9.1%, 8=74.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:26.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.745 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.745 filename1: (groupid=0, jobs=1): err= 0: pid=83567: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=203, BW=815KiB/s (835kB/s)(8188KiB/10041msec) 00:20:26.746 slat (usec): min=3, max=4025, avg=19.25, stdev=99.50 00:20:26.746 clat (msec): min=35, max=132, avg=78.29, stdev=20.44 00:20:26.746 lat (msec): min=35, max=132, avg=78.31, stdev=20.43 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:26.746 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:20:26.746 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:20:26.746 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 130], 00:20:26.746 | 99.99th=[ 132] 00:20:26.746 bw ( KiB/s): min= 656, max= 1024, per=4.25%, avg=814.60, stdev=124.15, samples=20 00:20:26.746 iops : min= 164, max= 256, avg=203.60, stdev=31.08, samples=20 00:20:26.746 lat (msec) : 50=11.33%, 100=71.62%, 250=17.05% 00:20:26.746 cpu : usr=38.56%, sys=1.89%, ctx=1392, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename1: (groupid=0, jobs=1): err= 0: pid=83568: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=210, BW=844KiB/s (864kB/s)(8440KiB/10004msec) 00:20:26.746 slat (usec): min=4, max=4031, avg=19.30, stdev=87.89 00:20:26.746 clat (msec): min=4, max=205, avg=75.76, stdev=23.30 00:20:26.746 lat (msec): min=4, max=205, avg=75.78, stdev=23.29 00:20:26.746 clat percentiles (msec): 
00:20:26.746 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:26.746 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 80], 00:20:26.746 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 112], 00:20:26.746 | 99.00th=[ 123], 99.50th=[ 182], 99.90th=[ 182], 99.95th=[ 205], 00:20:26.746 | 99.99th=[ 205] 00:20:26.746 bw ( KiB/s): min= 512, max= 1080, per=4.40%, avg=841.26, stdev=152.61, samples=19 00:20:26.746 iops : min= 128, max= 270, avg=210.32, stdev=38.15, samples=19 00:20:26.746 lat (msec) : 10=0.33%, 50=16.68%, 100=66.87%, 250=16.11% 00:20:26.746 cpu : usr=42.52%, sys=1.99%, ctx=1336, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename1: (groupid=0, jobs=1): err= 0: pid=83569: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=175, BW=703KiB/s (720kB/s)(7060KiB/10039msec) 00:20:26.746 slat (usec): min=7, max=4022, avg=17.51, stdev=95.65 00:20:26.746 clat (msec): min=26, max=152, avg=90.68, stdev=20.70 00:20:26.746 lat (msec): min=26, max=152, avg=90.70, stdev=20.71 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 43], 5.00th=[ 58], 10.00th=[ 69], 20.00th=[ 73], 00:20:26.746 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 96], 00:20:26.746 | 70.00th=[ 104], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 126], 00:20:26.746 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:20:26.746 | 99.99th=[ 153] 00:20:26.746 bw ( KiB/s): min= 512, max= 864, per=3.67%, avg=702.20, stdev=93.13, samples=20 00:20:26.746 iops : min= 128, max= 216, avg=175.50, stdev=23.27, samples=20 00:20:26.746 lat (msec) : 50=2.55%, 100=63.80%, 250=33.65% 00:20:26.746 cpu : usr=42.60%, sys=2.06%, ctx=1330, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=5.0%, 4=20.1%, 8=61.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=93.0%, 8=2.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename2: (groupid=0, jobs=1): err= 0: pid=83570: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=202, BW=810KiB/s (830kB/s)(8136KiB/10040msec) 00:20:26.746 slat (usec): min=5, max=6024, avg=28.28, stdev=239.22 00:20:26.746 clat (msec): min=35, max=146, avg=78.75, stdev=22.62 00:20:26.746 lat (msec): min=35, max=146, avg=78.78, stdev=22.62 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 56], 00:20:26.746 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:20:26.746 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 117], 00:20:26.746 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:26.746 | 99.99th=[ 146] 00:20:26.746 bw ( KiB/s): min= 576, max= 1000, per=4.23%, avg=809.40, stdev=144.39, samples=20 00:20:26.746 iops : min= 144, max= 250, avg=202.35, stdev=36.10, samples=20 00:20:26.746 lat (msec) : 50=12.14%, 100=68.44%, 250=19.42% 00:20:26.746 cpu : usr=40.84%, sys=2.00%, ctx=1194, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 
8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename2: (groupid=0, jobs=1): err= 0: pid=83571: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=209, BW=838KiB/s (858kB/s)(8384KiB/10002msec) 00:20:26.746 slat (usec): min=3, max=8037, avg=26.95, stdev=229.32 00:20:26.746 clat (msec): min=3, max=202, avg=76.25, stdev=24.15 00:20:26.746 lat (msec): min=3, max=202, avg=76.28, stdev=24.15 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:20:26.746 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:20:26.746 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:20:26.746 | 99.00th=[ 122], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 203], 00:20:26.746 | 99.99th=[ 203] 00:20:26.746 bw ( KiB/s): min= 504, max= 1080, per=4.35%, avg=832.42, stdev=148.92, samples=19 00:20:26.746 iops : min= 126, max= 270, avg=208.11, stdev=37.23, samples=19 00:20:26.746 lat (msec) : 4=0.76%, 10=0.14%, 50=13.79%, 100=67.89%, 250=17.41% 00:20:26.746 cpu : usr=38.82%, sys=1.97%, ctx=1322, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=83.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename2: (groupid=0, jobs=1): err= 0: pid=83572: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=208, BW=836KiB/s (856kB/s)(8428KiB/10082msec) 00:20:26.746 slat (usec): min=6, max=9033, avg=20.36, stdev=215.18 00:20:26.746 clat (usec): min=1720, max=155458, avg=76324.31, stdev=30507.68 00:20:26.746 lat (usec): min=1732, max=155476, avg=76344.67, stdev=30519.06 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 45], 20.00th=[ 55], 00:20:26.746 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:20:26.746 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 115], 00:20:26.746 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 157], 00:20:26.746 | 99.99th=[ 157] 00:20:26.746 bw ( KiB/s): min= 528, max= 2142, per=4.36%, avg=835.20, stdev=336.39, samples=20 00:20:26.746 iops : min= 132, max= 535, avg=208.75, stdev=83.98, samples=20 00:20:26.746 lat (msec) : 2=0.76%, 4=3.04%, 10=4.56%, 50=7.36%, 100=58.90% 00:20:26.746 lat (msec) : 250=25.39% 00:20:26.746 cpu : usr=44.78%, sys=2.27%, ctx=1620, majf=0, minf=0 00:20:26.746 IO depths : 1=0.2%, 2=2.6%, 4=9.6%, 8=72.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename2: (groupid=0, jobs=1): err= 0: pid=83573: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=209, BW=838KiB/s (858kB/s)(8380KiB/10004msec) 00:20:26.746 slat (usec): min=4, max=8027, avg=26.43, stdev=274.53 00:20:26.746 clat 
(usec): min=1929, max=207230, avg=76285.41, stdev=26132.56 00:20:26.746 lat (usec): min=1938, max=207244, avg=76311.84, stdev=26130.52 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:20:26.746 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 82], 00:20:26.746 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 120], 00:20:26.746 | 99.00th=[ 144], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 207], 00:20:26.746 | 99.99th=[ 207] 00:20:26.746 bw ( KiB/s): min= 400, max= 1072, per=4.33%, avg=828.63, stdev=171.05, samples=19 00:20:26.746 iops : min= 100, max= 268, avg=207.16, stdev=42.76, samples=19 00:20:26.746 lat (msec) : 2=0.14%, 4=0.76%, 10=0.14%, 50=17.66%, 100=62.10% 00:20:26.746 lat (msec) : 250=19.19% 00:20:26.746 cpu : usr=31.55%, sys=1.66%, ctx=921, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.746 filename2: (groupid=0, jobs=1): err= 0: pid=83574: Mon Jul 15 19:58:18 2024 00:20:26.746 read: IOPS=204, BW=818KiB/s (837kB/s)(8212KiB/10041msec) 00:20:26.746 slat (usec): min=3, max=12033, avg=41.75, stdev=484.30 00:20:26.746 clat (msec): min=34, max=161, avg=78.03, stdev=21.82 00:20:26.746 lat (msec): min=34, max=161, avg=78.08, stdev=21.83 00:20:26.746 clat percentiles (msec): 00:20:26.746 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:26.746 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:20:26.746 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 112], 00:20:26.746 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 163], 00:20:26.746 | 99.99th=[ 163] 00:20:26.746 bw ( KiB/s): min= 512, max= 976, per=4.25%, avg=814.80, stdev=130.82, samples=20 00:20:26.746 iops : min= 128, max= 244, avg=203.70, stdev=32.70, samples=20 00:20:26.746 lat (msec) : 50=13.05%, 100=69.22%, 250=17.73% 00:20:26.746 cpu : usr=32.18%, sys=1.43%, ctx=913, majf=0, minf=9 00:20:26.746 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:26.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.746 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.747 filename2: (groupid=0, jobs=1): err= 0: pid=83575: Mon Jul 15 19:58:18 2024 00:20:26.747 read: IOPS=191, BW=765KiB/s (783kB/s)(7704KiB/10071msec) 00:20:26.747 slat (usec): min=4, max=8031, avg=20.37, stdev=188.64 00:20:26.747 clat (msec): min=22, max=144, avg=83.45, stdev=22.97 00:20:26.747 lat (msec): min=22, max=144, avg=83.47, stdev=22.97 00:20:26.747 clat percentiles (msec): 00:20:26.747 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 63], 00:20:26.747 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 93], 00:20:26.747 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 120], 00:20:26.747 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 146], 00:20:26.747 | 99.99th=[ 146] 00:20:26.747 bw ( KiB/s): min= 528, max= 1024, per=3.99%, avg=764.05, stdev=137.33, samples=20 00:20:26.747 iops : min= 132, max= 256, 
avg=191.00, stdev=34.32, samples=20 00:20:26.747 lat (msec) : 50=11.42%, 100=62.41%, 250=26.17% 00:20:26.747 cpu : usr=31.91%, sys=1.61%, ctx=925, majf=0, minf=9 00:20:26.747 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=72.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:26.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.747 filename2: (groupid=0, jobs=1): err= 0: pid=83576: Mon Jul 15 19:58:18 2024 00:20:26.747 read: IOPS=204, BW=819KiB/s (838kB/s)(8220KiB/10042msec) 00:20:26.747 slat (usec): min=5, max=8037, avg=27.10, stdev=306.06 00:20:26.747 clat (msec): min=20, max=132, avg=77.98, stdev=20.34 00:20:26.747 lat (msec): min=20, max=132, avg=78.00, stdev=20.34 00:20:26.747 clat percentiles (msec): 00:20:26.747 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:26.747 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:20:26.747 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:20:26.747 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 131], 00:20:26.747 | 99.99th=[ 132] 00:20:26.747 bw ( KiB/s): min= 656, max= 992, per=4.27%, avg=817.75, stdev=109.52, samples=20 00:20:26.747 iops : min= 164, max= 248, avg=204.40, stdev=27.43, samples=20 00:20:26.747 lat (msec) : 50=13.28%, 100=70.71%, 250=16.01% 00:20:26.747 cpu : usr=31.43%, sys=1.52%, ctx=899, majf=0, minf=9 00:20:26.747 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:26.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.747 filename2: (groupid=0, jobs=1): err= 0: pid=83577: Mon Jul 15 19:58:18 2024 00:20:26.747 read: IOPS=189, BW=757KiB/s (776kB/s)(7604KiB/10039msec) 00:20:26.747 slat (usec): min=5, max=8066, avg=32.35, stdev=322.16 00:20:26.747 clat (msec): min=26, max=155, avg=84.20, stdev=22.16 00:20:26.747 lat (msec): min=26, max=155, avg=84.24, stdev=22.16 00:20:26.747 clat percentiles (msec): 00:20:26.747 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 67], 00:20:26.747 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 93], 00:20:26.747 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 120], 00:20:26.747 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:20:26.747 | 99.99th=[ 155] 00:20:26.747 bw ( KiB/s): min= 528, max= 968, per=3.95%, avg=756.60, stdev=137.68, samples=20 00:20:26.747 iops : min= 132, max= 242, avg=189.15, stdev=34.42, samples=20 00:20:26.747 lat (msec) : 50=9.10%, 100=65.33%, 250=25.57% 00:20:26.747 cpu : usr=36.83%, sys=1.91%, ctx=1131, majf=0, minf=9 00:20:26.747 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:26.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 complete : 0=0.0%, 4=90.2%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.747 issued rwts: total=1901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:26.747 00:20:26.747 Run status group 0 (all jobs): 00:20:26.747 READ: bw=18.7MiB/s (19.6MB/s), 703KiB/s-854KiB/s 
(720kB/s-875kB/s), io=188MiB (198MB), run=10002-10082msec 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 bdev_null0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 [2024-07-15 19:58:19.140131] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 bdev_null1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.747 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:26.748 { 00:20:26.748 "params": { 00:20:26.748 "name": "Nvme$subsystem", 00:20:26.748 "trtype": "$TEST_TRANSPORT", 00:20:26.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.748 "adrfam": "ipv4", 00:20:26.748 "trsvcid": "$NVMF_PORT", 00:20:26.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.748 "hdgst": ${hdgst:-false}, 00:20:26.748 "ddgst": ${ddgst:-false} 00:20:26.748 }, 00:20:26.748 "method": "bdev_nvme_attach_controller" 00:20:26.748 } 00:20:26.748 EOF 00:20:26.748 )") 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:26.748 19:58:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:26.748 { 00:20:26.748 "params": { 00:20:26.748 "name": "Nvme$subsystem", 00:20:26.748 "trtype": "$TEST_TRANSPORT", 00:20:26.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.748 "adrfam": "ipv4", 00:20:26.748 "trsvcid": "$NVMF_PORT", 00:20:26.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.748 "hdgst": ${hdgst:-false}, 00:20:26.748 "ddgst": ${ddgst:-false} 00:20:26.748 }, 00:20:26.748 "method": "bdev_nvme_attach_controller" 00:20:26.748 } 00:20:26.748 EOF 00:20:26.748 )") 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
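Note: create_subsystem/destroy_subsystem in target/dif.sh drive everything through rpc_cmd, which forwards to SPDK's scripts/rpc.py against the running nvmf target. Setting up the same DIF-capable null-bdev subsystem by hand would look roughly like the sketch below; the scripts/rpc.py invocation and an already-running target with the tcp transport configured are assumptions, while the method names and arguments are copied from the trace above:

# assumed: run from the SPDK repo root against an nvmf_tgt that already has the tcp transport created
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, mirroring the nvmf_delete_subsystem / bdev_null_delete calls seen earlier in the log
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0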
00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:26.748 "params": { 00:20:26.748 "name": "Nvme0", 00:20:26.748 "trtype": "tcp", 00:20:26.748 "traddr": "10.0.0.2", 00:20:26.748 "adrfam": "ipv4", 00:20:26.748 "trsvcid": "4420", 00:20:26.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:26.748 "hdgst": false, 00:20:26.748 "ddgst": false 00:20:26.748 }, 00:20:26.748 "method": "bdev_nvme_attach_controller" 00:20:26.748 },{ 00:20:26.748 "params": { 00:20:26.748 "name": "Nvme1", 00:20:26.748 "trtype": "tcp", 00:20:26.748 "traddr": "10.0.0.2", 00:20:26.748 "adrfam": "ipv4", 00:20:26.748 "trsvcid": "4420", 00:20:26.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.748 "hdgst": false, 00:20:26.748 "ddgst": false 00:20:26.748 }, 00:20:26.748 "method": "bdev_nvme_attach_controller" 00:20:26.748 }' 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:26.748 19:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:26.748 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:26.748 ... 00:20:26.748 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:26.748 ... 
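Note: the filename0/filename1 job lines just above are what gen_fio_conf produced for this pass (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1); numjobs=2 across two file sections accounts for the "Starting 4 threads" line that follows. A hypothetical job file approximating what the spdk_bdev ioengine consumes here; the section layout, bdev names (Nvme0n1/Nvme1n1) and exact option set are assumptions inferred from the fio banner, not the script's literal output:

; hypothetical approximation, not the harness's actual gen_fio_conf output
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1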
00:20:26.748 fio-3.35 00:20:26.748 Starting 4 threads 00:20:30.930 00:20:30.930 filename0: (groupid=0, jobs=1): err= 0: pid=83725: Mon Jul 15 19:58:24 2024 00:20:30.930 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:20:30.930 slat (nsec): min=7772, max=54082, avg=14438.42, stdev=3227.56 00:20:30.930 clat (usec): min=1404, max=6277, avg=3904.85, stdev=177.78 00:20:30.930 lat (usec): min=1415, max=6294, avg=3919.29, stdev=177.89 00:20:30.930 clat percentiles (usec): 00:20:30.930 | 1.00th=[ 3752], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3818], 00:20:30.930 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:20:30.930 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4047], 00:20:30.930 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 5473], 00:20:30.930 | 99.99th=[ 5604] 00:20:30.930 bw ( KiB/s): min=15503, max=16384, per=24.92%, avg=16158.11, stdev=265.21, samples=9 00:20:30.930 iops : min= 1937, max= 2048, avg=2019.67, stdev=33.42, samples=9 00:20:30.930 lat (msec) : 2=0.07%, 4=88.12%, 10=11.81% 00:20:30.930 cpu : usr=91.66%, sys=7.52%, ctx=160, majf=0, minf=0 00:20:30.930 IO depths : 1=0.1%, 2=24.9%, 4=50.1%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 issued rwts: total=10103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:30.930 filename0: (groupid=0, jobs=1): err= 0: pid=83726: Mon Jul 15 19:58:24 2024 00:20:30.930 read: IOPS=2041, BW=15.9MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:20:30.930 slat (nsec): min=7370, max=55337, avg=13126.60, stdev=3801.56 00:20:30.930 clat (usec): min=1049, max=7014, avg=3870.39, stdev=271.83 00:20:30.930 lat (usec): min=1058, max=7032, avg=3883.51, stdev=271.92 00:20:30.930 clat percentiles (usec): 00:20:30.930 | 1.00th=[ 2278], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3818], 00:20:30.930 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:20:30.930 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4047], 00:20:30.930 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[ 4883], 00:20:30.930 | 99.99th=[ 6325] 00:20:30.930 bw ( KiB/s): min=16128, max=17312, per=25.23%, avg=16359.11, stdev=371.04, samples=9 00:20:30.930 iops : min= 2016, max= 2164, avg=2044.89, stdev=46.38, samples=9 00:20:30.930 lat (msec) : 2=0.83%, 4=87.96%, 10=11.20% 00:20:30.930 cpu : usr=92.28%, sys=6.96%, ctx=5, majf=0, minf=9 00:20:30.930 IO depths : 1=0.1%, 2=24.0%, 4=50.6%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 issued rwts: total=10210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:30.930 filename1: (groupid=0, jobs=1): err= 0: pid=83727: Mon Jul 15 19:58:24 2024 00:20:30.930 read: IOPS=2027, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5001msec) 00:20:30.930 slat (nsec): min=7330, max=56050, avg=13958.46, stdev=4016.43 00:20:30.930 clat (usec): min=1017, max=8864, avg=3888.39, stdev=212.94 00:20:30.930 lat (usec): min=1026, max=8899, avg=3902.35, stdev=213.24 00:20:30.930 clat percentiles (usec): 00:20:30.930 | 1.00th=[ 3425], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3818], 00:20:30.930 | 30.00th=[ 3851], 
40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3884], 00:20:30.930 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4047], 00:20:30.930 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4948], 99.95th=[ 6325], 00:20:30.930 | 99.99th=[ 7242] 00:20:30.930 bw ( KiB/s): min=16128, max=16384, per=25.04%, avg=16231.00, stdev=87.23, samples=9 00:20:30.930 iops : min= 2016, max= 2048, avg=2028.78, stdev=10.84, samples=9 00:20:30.930 lat (msec) : 2=0.31%, 4=88.68%, 10=11.02% 00:20:30.930 cpu : usr=92.46%, sys=6.78%, ctx=9, majf=0, minf=9 00:20:30.930 IO depths : 1=0.1%, 2=24.5%, 4=50.3%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 issued rwts: total=10140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:30.930 filename1: (groupid=0, jobs=1): err= 0: pid=83728: Mon Jul 15 19:58:24 2024 00:20:30.930 read: IOPS=2015, BW=15.7MiB/s (16.5MB/s)(78.8MiB/5001msec) 00:20:30.930 slat (nsec): min=7260, max=56648, avg=14865.49, stdev=3233.40 00:20:30.930 clat (usec): min=2035, max=7048, avg=3910.20, stdev=193.12 00:20:30.930 lat (usec): min=2049, max=7075, avg=3925.06, stdev=193.31 00:20:30.930 clat percentiles (usec): 00:20:30.930 | 1.00th=[ 3752], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3818], 00:20:30.930 | 30.00th=[ 3851], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:20:30.930 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4047], 00:20:30.930 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 5997], 99.95th=[ 6849], 00:20:30.930 | 99.99th=[ 6915] 00:20:30.930 bw ( KiB/s): min=15232, max=16272, per=24.86%, avg=16113.78, stdev=335.38, samples=9 00:20:30.930 iops : min= 1904, max= 2034, avg=2014.22, stdev=41.92, samples=9 00:20:30.930 lat (msec) : 4=88.38%, 10=11.62% 00:20:30.930 cpu : usr=92.10%, sys=7.12%, ctx=7, majf=0, minf=0 00:20:30.930 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.930 issued rwts: total=10080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:30.930 00:20:30.930 Run status group 0 (all jobs): 00:20:30.930 READ: bw=63.3MiB/s (66.4MB/s), 15.7MiB/s-15.9MiB/s (16.5MB/s-16.7MB/s), io=317MiB (332MB), run=5001-5002msec 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 ************************************ 00:20:31.189 END TEST fio_dif_rand_params 00:20:31.189 ************************************ 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.189 00:20:31.189 real 0m23.587s 00:20:31.189 user 2m4.460s 00:20:31.189 sys 0m8.014s 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.189 19:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 19:58:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:31.189 19:58:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:31.189 19:58:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:31.190 19:58:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.190 19:58:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 ************************************ 00:20:31.190 START TEST fio_dif_digest 00:20:31.190 ************************************ 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
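The create_subsystems step traced next goes through rpc_cmd, the autotest wrapper around SPDK's JSON-RPC client, so the digest-test target setup condenses to roughly the following scripts/rpc.py sequence. This is a sketch for readability only: calling scripts/rpc.py directly on the default RPC socket (instead of the rpc_cmd wrapper) is an assumption, while the arguments themselves are the ones visible in the trace below.
  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 (NULL_DIF=3 above)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # expose it over NVMe/TCP as cnode0, listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420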
00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 bdev_null0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 [2024-07-15 19:58:25.344724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.190 { 00:20:31.190 "params": { 00:20:31.190 "name": "Nvme$subsystem", 00:20:31.190 "trtype": "$TEST_TRANSPORT", 00:20:31.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.190 "adrfam": "ipv4", 00:20:31.190 "trsvcid": "$NVMF_PORT", 00:20:31.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.190 "hdgst": ${hdgst:-false}, 00:20:31.190 "ddgst": ${ddgst:-false} 00:20:31.190 }, 00:20:31.190 "method": 
"bdev_nvme_attach_controller" 00:20:31.190 } 00:20:31.190 EOF 00:20:31.190 )") 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.190 "params": { 00:20:31.190 "name": "Nvme0", 00:20:31.190 "trtype": "tcp", 00:20:31.190 "traddr": "10.0.0.2", 00:20:31.190 "adrfam": "ipv4", 00:20:31.190 "trsvcid": "4420", 00:20:31.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.190 "hdgst": true, 00:20:31.190 "ddgst": true 00:20:31.190 }, 00:20:31.190 "method": "bdev_nvme_attach_controller" 00:20:31.190 }' 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.190 19:58:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.449 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:31.449 ... 00:20:31.449 fio-3.35 00:20:31.449 Starting 3 threads 00:20:43.651 00:20:43.651 filename0: (groupid=0, jobs=1): err= 0: pid=83834: Mon Jul 15 19:58:36 2024 00:20:43.651 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10001msec) 00:20:43.651 slat (nsec): min=7177, max=46609, avg=16169.31, stdev=5273.10 00:20:43.651 clat (usec): min=11677, max=14834, avg=13245.06, stdev=469.58 00:20:43.651 lat (usec): min=11690, max=14856, avg=13261.23, stdev=470.13 00:20:43.651 clat percentiles (usec): 00:20:43.651 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:20:43.651 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:20:43.651 | 70.00th=[13566], 80.00th=[13566], 90.00th=[13698], 95.00th=[13829], 00:20:43.651 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:20:43.651 | 99.99th=[14877] 00:20:43.651 bw ( KiB/s): min=27648, max=30720, per=33.32%, avg=28901.05, stdev=857.14, samples=19 00:20:43.651 iops : min= 216, max= 240, avg=225.79, stdev= 6.70, samples=19 00:20:43.651 lat (msec) : 20=100.00% 00:20:43.651 cpu : usr=90.87%, sys=8.61%, ctx=61, majf=0, minf=0 00:20:43.651 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:43.651 filename0: (groupid=0, jobs=1): err= 0: pid=83835: Mon Jul 15 19:58:36 2024 00:20:43.651 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10002msec) 00:20:43.651 slat (nsec): min=6851, max=50499, avg=15190.54, stdev=5630.76 00:20:43.651 clat (usec): min=11645, max=16399, avg=13249.14, stdev=480.60 00:20:43.651 lat (usec): min=11652, max=16424, avg=13264.33, stdev=481.24 00:20:43.651 clat percentiles (usec): 00:20:43.651 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:20:43.651 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:20:43.651 | 70.00th=[13566], 80.00th=[13566], 90.00th=[13698], 95.00th=[13829], 00:20:43.651 | 99.00th=[14091], 99.50th=[14353], 99.90th=[16319], 99.95th=[16450], 00:20:43.651 | 99.99th=[16450] 00:20:43.651 bw ( KiB/s): min=27648, max=30720, per=33.32%, avg=28901.05, stdev=733.54, samples=19 00:20:43.651 iops : min= 216, max= 240, avg=225.79, stdev= 5.73, samples=19 00:20:43.651 lat (msec) : 20=100.00% 00:20:43.651 cpu : usr=91.15%, sys=8.31%, ctx=28, majf=0, minf=0 00:20:43.651 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:43.651 filename0: (groupid=0, jobs=1): err= 0: pid=83836: Mon Jul 15 19:58:36 2024 00:20:43.651 read: IOPS=226, BW=28.3MiB/s (29.6MB/s)(283MiB/10005msec) 00:20:43.651 slat (nsec): min=6925, 
max=50247, avg=16380.28, stdev=5383.97 00:20:43.651 clat (usec): min=4969, max=14646, avg=13232.05, stdev=554.46 00:20:43.651 lat (usec): min=4975, max=14665, avg=13248.43, stdev=554.99 00:20:43.651 clat percentiles (usec): 00:20:43.651 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:20:43.651 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:20:43.651 | 70.00th=[13566], 80.00th=[13566], 90.00th=[13698], 95.00th=[13829], 00:20:43.651 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:20:43.651 | 99.99th=[14615] 00:20:43.651 bw ( KiB/s): min=27648, max=30720, per=33.32%, avg=28904.00, stdev=855.48, samples=19 00:20:43.651 iops : min= 216, max= 240, avg=225.79, stdev= 6.70, samples=19 00:20:43.651 lat (msec) : 10=0.13%, 20=99.87% 00:20:43.651 cpu : usr=90.76%, sys=8.69%, ctx=254, majf=0, minf=9 00:20:43.651 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.651 issued rwts: total=2262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:43.651 00:20:43.651 Run status group 0 (all jobs): 00:20:43.651 READ: bw=84.7MiB/s (88.8MB/s), 28.2MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=848MiB (889MB), run=10001-10005msec 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.651 ************************************ 00:20:43.651 END TEST fio_dif_digest 00:20:43.651 ************************************ 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.651 00:20:43.651 real 0m11.036s 00:20:43.651 user 0m27.938s 00:20:43.651 sys 0m2.838s 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.651 19:58:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.651 19:58:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:43.651 19:58:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:43.651 19:58:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp 
== tcp ']' 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.651 rmmod nvme_tcp 00:20:43.651 rmmod nvme_fabrics 00:20:43.651 rmmod nvme_keyring 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.651 19:58:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:43.652 19:58:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:43.652 19:58:36 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83073 ']' 00:20:43.652 19:58:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83073 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83073 ']' 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83073 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83073 00:20:43.652 killing process with pid 83073 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83073' 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83073 00:20:43.652 19:58:36 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83073 00:20:43.652 19:58:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:43.652 19:58:36 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:43.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.652 Waiting for block devices as requested 00:20:43.652 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.652 19:58:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:43.652 19:58:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.652 19:58:37 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:43.652 ************************************ 00:20:43.652 END TEST nvmf_dif 00:20:43.652 ************************************ 00:20:43.652 00:20:43.652 real 0m59.861s 00:20:43.652 user 3m48.479s 00:20:43.652 sys 0m19.308s 00:20:43.652 19:58:37 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.652 19:58:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:43.652 19:58:37 -- common/autotest_common.sh@1142 -- # return 0 00:20:43.652 19:58:37 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:43.652 19:58:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:43.652 19:58:37 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:20:43.652 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:20:43.652 ************************************ 00:20:43.652 START TEST nvmf_abort_qd_sizes 00:20:43.652 ************************************ 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:43.652 * Looking for test storage... 00:20:43.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:43.652 19:58:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:43.652 Cannot find device "nvmf_tgt_br" 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.652 Cannot find device "nvmf_tgt_br2" 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:43.652 Cannot find device "nvmf_tgt_br" 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:43.652 Cannot find device "nvmf_tgt_br2" 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:43.652 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.653 19:58:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:43.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:43.653 00:20:43.653 --- 10.0.0.2 ping statistics --- 00:20:43.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.653 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:43.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:43.653 00:20:43.653 --- 10.0.0.3 ping statistics --- 00:20:43.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.653 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:43.653 00:20:43.653 --- 10.0.0.1 ping statistics --- 00:20:43.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.653 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:43.653 19:58:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:44.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:44.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.478 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.478 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84429 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84429 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84429 ']' 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.735 19:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:44.735 [2024-07-15 19:58:38.789162] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
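The nvmf_veth_init trace above builds the network that the target process is now being started into via ip netns exec: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target answers on 10.0.0.2 from inside nvmf_tgt_ns_spdk, and the veth peers meet on the nvmf_br bridge. A minimal sketch of the same wiring, assuming iproute2/iptables and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), looks like this; the command set mirrors the trace, only the ordering and omissions are editorial assumptions.
  # namespace plus two veth pairs: *_if carries an address, *_br joins the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator side in the root namespace, target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the root-namespace ends of both pairs together
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  # let NVMe/TCP traffic on port 4420 reach the initiator-side interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The ping statistics above (10.0.0.2 and 10.0.0.3 reachable from the root namespace, 10.0.0.1 from inside the namespace) are the harness verifying exactly this wiring.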
00:20:44.735 [2024-07-15 19:58:38.789245] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.735 [2024-07-15 19:58:38.932655] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.992 [2024-07-15 19:58:39.060819] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.992 [2024-07-15 19:58:39.060882] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.992 [2024-07-15 19:58:39.060895] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.992 [2024-07-15 19:58:39.060906] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.992 [2024-07-15 19:58:39.060915] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.992 [2024-07-15 19:58:39.061004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.992 [2024-07-15 19:58:39.061463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.992 [2024-07-15 19:58:39.062093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.992 [2024-07-15 19:58:39.062105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.992 [2024-07-15 19:58:39.139051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:45.927 19:58:39 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:45.927 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 ************************************ 00:20:45.928 START TEST spdk_target_abort 00:20:45.928 ************************************ 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 spdk_targetn1 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 [2024-07-15 19:58:39.963713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 [2024-07-15 19:58:39.995841] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.928 19:58:39 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:45.928 19:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.928 19:58:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:49.210 Initializing NVMe Controllers 00:20:49.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:49.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:49.210 Initialization complete. Launching workers. 
00:20:49.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10204, failed: 0 00:20:49.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1023, failed to submit 9181 00:20:49.210 success 845, unsuccess 178, failed 0 00:20:49.210 19:58:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:49.210 19:58:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.518 Initializing NVMe Controllers 00:20:52.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.518 Initialization complete. Launching workers. 00:20:52.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8934, failed: 0 00:20:52.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7770 00:20:52.518 success 371, unsuccess 793, failed 0 00:20:52.518 19:58:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.519 19:58:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:55.816 Initializing NVMe Controllers 00:20:55.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:55.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:55.816 Initialization complete. Launching workers. 
00:20:55.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31888, failed: 0 00:20:55.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2318, failed to submit 29570 00:20:55.816 success 445, unsuccess 1873, failed 0 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.816 19:58:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84429 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84429 ']' 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84429 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84429 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.382 killing process with pid 84429 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84429' 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84429 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84429 00:20:56.382 00:20:56.382 real 0m10.738s 00:20:56.382 user 0m43.202s 00:20:56.382 sys 0m2.125s 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.382 19:58:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.382 ************************************ 00:20:56.382 END TEST spdk_target_abort 00:20:56.382 ************************************ 00:20:56.641 19:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:56.641 19:58:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:56.641 19:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:56.641 19:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.641 19:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.641 
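The spdk_target_abort steps above boil down to a queue-depth sweep over the SPDK abort example: the xtrace lines assemble the transport string field by field and invoke the binary with -q 4, 24 and 64. A minimal bash sketch of that loop, reconstructed from the trace lines only; the function name rabort_sketch and the $rootdir variable are placeholders, not the suite's actual code, and the flags are copied as logged without further interpretation.

    # Sketch of the qd-size sweep traced above (reconstructed from xtrace).
    rabort_sketch() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5 qd
        local target="trtype:$trtype adrfam:$adrfam traddr:$traddr trsvcid:$trsvcid subnqn:$subnqn"
        for qd in 4 24 64; do
            # Flags as in the trace: queue depth, rw workload, 50 mix, 4096-byte I/O, transport ID.
            "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
        done
    }
    # Usage matching the run above:
    # rabort_sketch tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
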
************************************ 00:20:56.641 START TEST kernel_target_abort 00:20:56.641 ************************************ 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:56.641 19:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:56.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.899 Waiting for block devices as requested 00:20:56.899 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.158 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:57.158 No valid GPT data, bailing 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:57.158 No valid GPT data, bailing 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:57.158 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:57.159 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:57.159 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:57.159 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:57.159 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:57.159 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:57.417 No valid GPT data, bailing 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:57.417 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:57.418 No valid GPT data, bailing 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c --hostid=f7fce926-7bf5-4841-86b1-6d78480abc2c -a 10.0.0.1 -t tcp -s 4420 00:20:57.418 00:20:57.418 Discovery Log Number of Records 2, Generation counter 2 00:20:57.418 =====Discovery Log Entry 0====== 00:20:57.418 trtype: tcp 00:20:57.418 adrfam: ipv4 00:20:57.418 subtype: current discovery subsystem 00:20:57.418 treq: not specified, sq flow control disable supported 00:20:57.418 portid: 1 00:20:57.418 trsvcid: 4420 00:20:57.418 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:57.418 traddr: 10.0.0.1 00:20:57.418 eflags: none 00:20:57.418 sectype: none 00:20:57.418 =====Discovery Log Entry 1====== 00:20:57.418 trtype: tcp 00:20:57.418 adrfam: ipv4 00:20:57.418 subtype: nvme subsystem 00:20:57.418 treq: not specified, sq flow control disable supported 00:20:57.418 portid: 1 00:20:57.418 trsvcid: 4420 00:20:57.418 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:57.418 traddr: 10.0.0.1 00:20:57.418 eflags: none 00:20:57.418 sectype: none 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:57.418 19:58:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:57.418 19:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.704 Initializing NVMe Controllers 00:21:00.704 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:00.704 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:00.704 Initialization complete. Launching workers. 00:21:00.704 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31537, failed: 0 00:21:00.704 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31537, failed to submit 0 00:21:00.704 success 0, unsuccess 31537, failed 0 00:21:00.704 19:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.704 19:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:04.012 Initializing NVMe Controllers 00:21:04.012 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:04.012 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:04.012 Initialization complete. Launching workers. 
00:21:04.012 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65374, failed: 0 00:21:04.012 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26815, failed to submit 38559 00:21:04.012 success 0, unsuccess 26815, failed 0 00:21:04.012 19:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:04.012 19:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:07.299 Initializing NVMe Controllers 00:21:07.299 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:07.299 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:07.299 Initialization complete. Launching workers. 00:21:07.299 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72462, failed: 0 00:21:07.299 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18074, failed to submit 54388 00:21:07.299 success 0, unsuccess 18074, failed 0 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:07.299 19:59:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:07.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.244 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.244 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.244 00:21:09.244 real 0m12.704s 00:21:09.244 user 0m5.806s 00:21:09.244 sys 0m4.078s 00:21:09.244 19:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.244 19:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:09.244 ************************************ 00:21:09.244 END TEST kernel_target_abort 00:21:09.244 ************************************ 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:09.244 
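The kernel_target_abort test above builds, exercises and tears down a Linux nvmet soft target through configfs. xtrace does not show redirection targets, so the attribute file names in the sketch below are the standard nvmet configfs names and are inferred rather than copied from the log; the paths, values and ordering follow the trace.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    # Setup, as traced before the kernel-target abort runs:
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string; file name inferred
    echo 1 > "$subsys/attr_allow_any_host"                         # file names inferred, xtrace hides '>'
    echo /dev/nvme1n1 > "$ns/device_path"                          # block device picked by the GPT scan above
    echo 1 > "$ns/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Teardown, as traced in clean_kernel_target after the abort runs:
    echo 0 > "$ns/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$ns" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

After the setup half, the nvme discover call in the trace lists the discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420, which is the two-entry discovery log printed above.
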
19:59:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.244 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.244 rmmod nvme_tcp 00:21:09.244 rmmod nvme_fabrics 00:21:09.244 rmmod nvme_keyring 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84429 ']' 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84429 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84429 ']' 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84429 00:21:09.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84429) - No such process 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84429 is not found' 00:21:09.502 Process with pid 84429 is not found 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:09.502 19:59:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:09.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.760 Waiting for block devices as requested 00:21:09.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.020 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:10.020 00:21:10.020 real 0m26.750s 00:21:10.020 user 0m50.166s 00:21:10.020 sys 0m7.506s 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.020 19:59:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:10.020 ************************************ 00:21:10.020 END TEST nvmf_abort_qd_sizes 00:21:10.020 ************************************ 00:21:10.020 19:59:04 -- common/autotest_common.sh@1142 -- # return 0 00:21:10.020 19:59:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:10.020 19:59:04 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:10.020 19:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.020 19:59:04 -- common/autotest_common.sh@10 -- # set +x 00:21:10.020 ************************************ 00:21:10.020 START TEST keyring_file 00:21:10.020 ************************************ 00:21:10.020 19:59:04 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:10.020 * Looking for test storage... 00:21:10.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:10.020 19:59:04 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:10.020 19:59:04 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:21:10.020 19:59:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.021 19:59:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.021 19:59:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.021 19:59:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.021 19:59:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.021 19:59:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.021 19:59:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.021 19:59:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:10.021 19:59:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:10.021 19:59:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VqIavnpRtW 00:21:10.021 19:59:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:10.021 19:59:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VqIavnpRtW 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VqIavnpRtW 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VqIavnpRtW 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Hl7c88wKBx 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:10.280 19:59:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Hl7c88wKBx 00:21:10.280 19:59:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Hl7c88wKBx 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Hl7c88wKBx 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=85296 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.280 19:59:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85296 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85296 ']' 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.280 19:59:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.280 [2024-07-15 19:59:04.444620] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
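The keyring_file setup above prepares two PSK files with prep_key: make a temp file, write an NVMeTLSkey-1 interchange string into it, and tighten permissions to 0600. A shell-level sketch of that helper as it appears in the trace; the body of format_interchange_psk is a python one-liner in the real script and is only named here, not reproduced. The 0660 and removed-file cases later in this test deliberately violate these two conditions.

    # Sketch of prep_key as traced above; format_interchange_psk is assumed to
    # print the NVMeTLSkey-1:... string seen in the trace to stdout.
    prep_key_sketch() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                    # e.g. /tmp/tmp.VqIavnpRtW
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"                                # the 0660 variant later in the log is rejected
        echo "$path"
    }
    # key0path=$(prep_key_sketch key0 00112233445566778899aabbccddeeff 0)
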
00:21:10.280 [2024-07-15 19:59:04.444718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85296 ] 00:21:10.539 [2024-07-15 19:59:04.585605] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.539 [2024-07-15 19:59:04.712094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.539 [2024-07-15 19:59:04.772328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:11.537 19:59:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:11.537 [2024-07-15 19:59:05.447443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.537 null0 00:21:11.537 [2024-07-15 19:59:05.479400] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.537 [2024-07-15 19:59:05.479717] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:11.537 [2024-07-15 19:59:05.487387] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.537 19:59:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:11.537 19:59:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:11.538 [2024-07-15 19:59:05.499385] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:11.538 request: 00:21:11.538 { 00:21:11.538 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.538 "secure_channel": false, 00:21:11.538 "listen_address": { 00:21:11.538 "trtype": "tcp", 00:21:11.538 "traddr": "127.0.0.1", 00:21:11.538 "trsvcid": "4420" 00:21:11.538 }, 00:21:11.538 "method": "nvmf_subsystem_add_listener", 00:21:11.538 "req_id": 1 00:21:11.538 } 00:21:11.538 Got JSON-RPC error response 00:21:11.538 response: 00:21:11.538 { 00:21:11.538 "code": -32602, 00:21:11.538 "message": "Invalid parameters" 00:21:11.538 } 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
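The rejected nvmf_subsystem_add_listener call above is a negative test: NOT inverts the wrapped command's exit status, so the step passes precisely because the RPC fails with "Listener already exists" (JSON-RPC error -32602). A minimal stand-in for that pattern follows; the real helper in autotest_common.sh also routes through valid_exec_arg and tracks the exit status in more detail, which this sketch omits.

    # Minimal sketch of the NOT negative-test pattern used above (simplified).
    not_sketch() {
        if "$@"; then
            return 1    # unexpected success: the negative test fails
        fi
        return 0        # expected failure: the negative test passes
    }
    # not_sketch rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
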
00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:11.538 19:59:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=85313 00:21:11.538 19:59:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85313 /var/tmp/bperf.sock 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85313 ']' 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.538 19:59:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:11.538 19:59:05 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:11.538 [2024-07-15 19:59:05.565869] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:11.538 [2024-07-15 19:59:05.565986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85313 ] 00:21:11.538 [2024-07-15 19:59:05.708065] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.796 [2024-07-15 19:59:05.826023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.796 [2024-07-15 19:59:05.884994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:12.364 19:59:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.364 19:59:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:12.364 19:59:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:12.364 19:59:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:12.623 19:59:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Hl7c88wKBx 00:21:12.623 19:59:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Hl7c88wKBx 00:21:12.883 19:59:07 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:12.883 19:59:07 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:12.883 19:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.883 19:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.883 19:59:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.142 19:59:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.VqIavnpRtW == 
\/\t\m\p\/\t\m\p\.\V\q\I\a\v\n\p\R\t\W ]] 00:21:13.142 19:59:07 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:13.142 19:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:13.142 19:59:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:13.142 19:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.142 19:59:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.401 19:59:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Hl7c88wKBx == \/\t\m\p\/\t\m\p\.\H\l\7\c\8\8\w\K\B\x ]] 00:21:13.401 19:59:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:13.401 19:59:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:13.401 19:59:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.401 19:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.401 19:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:13.401 19:59:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.659 19:59:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:13.659 19:59:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:13.659 19:59:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:13.659 19:59:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.659 19:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:13.659 19:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.659 19:59:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.917 19:59:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:13.917 19:59:08 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.917 19:59:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.175 [2024-07-15 19:59:08.383172] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.433 nvme0n1 00:21:14.433 19:59:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:14.433 19:59:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:14.433 19:59:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:14.433 19:59:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:14.433 19:59:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:14.433 19:59:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.689 19:59:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:14.690 19:59:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:14.690 19:59:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:14.690 19:59:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:14.690 19:59:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:14.690 19:59:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.690 19:59:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:14.946 19:59:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:14.946 19:59:08 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:14.946 Running I/O for 1 seconds... 00:21:16.321 00:21:16.321 Latency(us) 00:21:16.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.321 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:16.321 nvme0n1 : 1.05 11184.74 43.69 0.00 0.00 11062.47 5391.83 48615.80 00:21:16.321 =================================================================================================================== 00:21:16.321 Total : 11184.74 43.69 0.00 0.00 11062.47 5391.83 48615.80 00:21:16.321 0 00:21:16.321 19:59:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:16.321 19:59:10 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:16.321 19:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.580 19:59:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:16.580 19:59:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:16.580 19:59:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.580 19:59:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:16.580 19:59:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.580 19:59:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:16.580 19:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.839 19:59:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:16.839 19:59:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
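Throughout this test the get_refcnt helper checks how many users a key has: it dumps the keyring over the bdevperf RPC socket and filters with jq, and attaching nvme0 with --psk key0 above is what bumps key0 from one reference to two. A sketch of that check; the rpc.py path, socket and field names are as in the trace, while the function name and the combined jq filter are placeholders.

    # Sketch of the refcount check used above: list keys over the bperf RPC
    # socket and extract one key's refcnt with jq.
    get_refcnt_sketch() {
        local name=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r --arg n "$name" '.[] | select(.name == $n) | .refcnt'
    }
    # Expected while nvme0 is attached with --psk key0: key0 -> 2, key1 -> 1.
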
00:21:16.839 19:59:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:16.839 19:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:17.096 [2024-07-15 19:59:11.254693] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:17.096 [2024-07-15 19:59:11.255298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadb710 (107): Transport endpoint is not connected 00:21:17.096 [2024-07-15 19:59:11.256287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadb710 (9): Bad file descriptor 00:21:17.096 [2024-07-15 19:59:11.257301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:17.096 [2024-07-15 19:59:11.257338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:17.096 [2024-07-15 19:59:11.257350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:17.096 request: 00:21:17.096 { 00:21:17.096 "name": "nvme0", 00:21:17.096 "trtype": "tcp", 00:21:17.096 "traddr": "127.0.0.1", 00:21:17.096 "adrfam": "ipv4", 00:21:17.096 "trsvcid": "4420", 00:21:17.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.096 "prchk_reftag": false, 00:21:17.096 "prchk_guard": false, 00:21:17.096 "hdgst": false, 00:21:17.096 "ddgst": false, 00:21:17.096 "psk": "key1", 00:21:17.096 "method": "bdev_nvme_attach_controller", 00:21:17.096 "req_id": 1 00:21:17.096 } 00:21:17.096 Got JSON-RPC error response 00:21:17.096 response: 00:21:17.096 { 00:21:17.096 "code": -5, 00:21:17.096 "message": "Input/output error" 00:21:17.096 } 00:21:17.096 19:59:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:17.096 19:59:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.096 19:59:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.096 19:59:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.096 19:59:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:17.096 19:59:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:17.096 19:59:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:17.096 19:59:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:17.096 19:59:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:17.096 19:59:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.353 19:59:11 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:17.353 19:59:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:17.353 19:59:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:17.353 19:59:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:17.353 19:59:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:17.353 19:59:11 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.353 19:59:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:17.611 19:59:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:17.611 19:59:11 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:17.611 19:59:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:17.870 19:59:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:17.870 19:59:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:18.127 19:59:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:18.127 19:59:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.127 19:59:12 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:18.385 19:59:12 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:18.385 19:59:12 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.VqIavnpRtW 00:21:18.385 19:59:12 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.385 19:59:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.385 19:59:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.697 [2024-07-15 19:59:12.842646] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VqIavnpRtW': 0100660 00:21:18.697 [2024-07-15 19:59:12.842692] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:18.697 request: 00:21:18.697 { 00:21:18.697 "name": "key0", 00:21:18.697 "path": "/tmp/tmp.VqIavnpRtW", 00:21:18.697 "method": "keyring_file_add_key", 00:21:18.697 "req_id": 1 00:21:18.697 } 00:21:18.697 Got JSON-RPC error response 00:21:18.697 response: 00:21:18.697 { 00:21:18.697 "code": -1, 00:21:18.697 "message": "Operation not permitted" 00:21:18.697 } 00:21:18.697 19:59:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:18.697 19:59:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.697 19:59:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.697 19:59:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.697 19:59:12 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.VqIavnpRtW 00:21:18.697 19:59:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.697 19:59:12 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VqIavnpRtW 00:21:18.957 19:59:13 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.VqIavnpRtW 00:21:18.957 19:59:13 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:18.957 19:59:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:18.957 19:59:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:18.957 19:59:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:18.957 19:59:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.957 19:59:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:19.215 19:59:13 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:19.215 19:59:13 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.215 19:59:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.215 19:59:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.473 [2024-07-15 19:59:13.686843] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VqIavnpRtW': No such file or directory 00:21:19.473 [2024-07-15 19:59:13.686890] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:19.473 [2024-07-15 19:59:13.686919] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:19.473 [2024-07-15 19:59:13.686936] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:19.473 [2024-07-15 19:59:13.686945] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:19.473 request: 00:21:19.473 { 00:21:19.473 "name": "nvme0", 00:21:19.473 "trtype": "tcp", 00:21:19.473 "traddr": "127.0.0.1", 00:21:19.473 "adrfam": "ipv4", 00:21:19.473 "trsvcid": "4420", 00:21:19.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:19.473 "prchk_reftag": false, 00:21:19.473 "prchk_guard": false, 00:21:19.473 "hdgst": false, 00:21:19.473 "ddgst": false, 00:21:19.473 "psk": "key0", 00:21:19.473 "method": "bdev_nvme_attach_controller", 00:21:19.473 "req_id": 1 00:21:19.473 } 00:21:19.473 
Got JSON-RPC error response 00:21:19.473 response: 00:21:19.473 { 00:21:19.473 "code": -19, 00:21:19.473 "message": "No such device" 00:21:19.473 } 00:21:19.473 19:59:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:19.473 19:59:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:19.473 19:59:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:19.473 19:59:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:19.473 19:59:13 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:19.473 19:59:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:20.039 19:59:14 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WHN7aHNoZ1 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:20.039 19:59:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WHN7aHNoZ1 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WHN7aHNoZ1 00:21:20.039 19:59:14 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WHN7aHNoZ1 00:21:20.039 19:59:14 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WHN7aHNoZ1 00:21:20.039 19:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WHN7aHNoZ1 00:21:20.297 19:59:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:20.297 19:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:20.557 nvme0n1 00:21:20.557 19:59:14 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:20.557 19:59:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:20.557 19:59:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:20.557 19:59:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.557 19:59:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:20.557 19:59:14 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.816 19:59:14 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:20.816 19:59:14 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:20.816 19:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:21.074 19:59:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:21.074 19:59:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:21.074 19:59:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:21.074 19:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:21.074 19:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:21.333 19:59:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:21.333 19:59:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:21.333 19:59:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:21.333 19:59:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:21.333 19:59:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:21.333 19:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:21.333 19:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:21.590 19:59:15 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:21.590 19:59:15 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:21.590 19:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:21.847 19:59:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:21.847 19:59:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:21.847 19:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:22.105 19:59:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:22.105 19:59:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WHN7aHNoZ1 00:21:22.105 19:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WHN7aHNoZ1 00:21:22.673 19:59:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Hl7c88wKBx 00:21:22.673 19:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Hl7c88wKBx 00:21:22.673 19:59:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.673 19:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:23.239 nvme0n1 00:21:23.239 19:59:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:23.239 19:59:17 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:23.499 19:59:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:23.499 "subsystems": [ 00:21:23.499 { 00:21:23.499 "subsystem": "keyring", 00:21:23.499 "config": [ 00:21:23.499 { 00:21:23.499 "method": "keyring_file_add_key", 00:21:23.499 "params": { 00:21:23.499 "name": "key0", 00:21:23.499 "path": "/tmp/tmp.WHN7aHNoZ1" 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "keyring_file_add_key", 00:21:23.499 "params": { 00:21:23.499 "name": "key1", 00:21:23.499 "path": "/tmp/tmp.Hl7c88wKBx" 00:21:23.499 } 00:21:23.499 } 00:21:23.499 ] 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "subsystem": "iobuf", 00:21:23.499 "config": [ 00:21:23.499 { 00:21:23.499 "method": "iobuf_set_options", 00:21:23.499 "params": { 00:21:23.499 "small_pool_count": 8192, 00:21:23.499 "large_pool_count": 1024, 00:21:23.499 "small_bufsize": 8192, 00:21:23.499 "large_bufsize": 135168 00:21:23.499 } 00:21:23.499 } 00:21:23.499 ] 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "subsystem": "sock", 00:21:23.499 "config": [ 00:21:23.499 { 00:21:23.499 "method": "sock_set_default_impl", 00:21:23.499 "params": { 00:21:23.499 "impl_name": "uring" 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "sock_impl_set_options", 00:21:23.499 "params": { 00:21:23.499 "impl_name": "ssl", 00:21:23.499 "recv_buf_size": 4096, 00:21:23.499 "send_buf_size": 4096, 00:21:23.499 "enable_recv_pipe": true, 00:21:23.499 "enable_quickack": false, 00:21:23.499 "enable_placement_id": 0, 00:21:23.499 "enable_zerocopy_send_server": true, 00:21:23.499 "enable_zerocopy_send_client": false, 00:21:23.499 "zerocopy_threshold": 0, 00:21:23.499 "tls_version": 0, 00:21:23.499 "enable_ktls": false 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "sock_impl_set_options", 00:21:23.499 "params": { 00:21:23.499 "impl_name": "posix", 00:21:23.499 "recv_buf_size": 2097152, 00:21:23.499 "send_buf_size": 2097152, 00:21:23.499 "enable_recv_pipe": true, 00:21:23.499 "enable_quickack": false, 00:21:23.499 "enable_placement_id": 0, 00:21:23.499 "enable_zerocopy_send_server": true, 00:21:23.499 "enable_zerocopy_send_client": false, 00:21:23.499 "zerocopy_threshold": 0, 00:21:23.499 "tls_version": 0, 00:21:23.499 "enable_ktls": false 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "sock_impl_set_options", 00:21:23.499 "params": { 00:21:23.499 "impl_name": "uring", 00:21:23.499 "recv_buf_size": 2097152, 00:21:23.499 "send_buf_size": 2097152, 00:21:23.499 "enable_recv_pipe": true, 00:21:23.499 "enable_quickack": false, 00:21:23.499 "enable_placement_id": 0, 00:21:23.499 "enable_zerocopy_send_server": false, 00:21:23.499 "enable_zerocopy_send_client": false, 00:21:23.499 "zerocopy_threshold": 0, 00:21:23.499 "tls_version": 0, 00:21:23.499 "enable_ktls": false 00:21:23.499 } 00:21:23.499 } 00:21:23.499 ] 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "subsystem": "vmd", 00:21:23.499 "config": [] 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "subsystem": "accel", 00:21:23.499 "config": [ 00:21:23.499 { 00:21:23.499 "method": "accel_set_options", 00:21:23.499 "params": { 00:21:23.499 "small_cache_size": 128, 00:21:23.499 "large_cache_size": 16, 00:21:23.499 "task_count": 2048, 00:21:23.499 "sequence_count": 2048, 00:21:23.499 "buf_count": 2048 00:21:23.499 } 00:21:23.499 } 00:21:23.499 ] 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "subsystem": "bdev", 00:21:23.499 "config": [ 00:21:23.499 { 
00:21:23.499 "method": "bdev_set_options", 00:21:23.499 "params": { 00:21:23.499 "bdev_io_pool_size": 65535, 00:21:23.499 "bdev_io_cache_size": 256, 00:21:23.499 "bdev_auto_examine": true, 00:21:23.499 "iobuf_small_cache_size": 128, 00:21:23.499 "iobuf_large_cache_size": 16 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "bdev_raid_set_options", 00:21:23.499 "params": { 00:21:23.499 "process_window_size_kb": 1024 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "bdev_iscsi_set_options", 00:21:23.499 "params": { 00:21:23.499 "timeout_sec": 30 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "bdev_nvme_set_options", 00:21:23.499 "params": { 00:21:23.499 "action_on_timeout": "none", 00:21:23.499 "timeout_us": 0, 00:21:23.499 "timeout_admin_us": 0, 00:21:23.499 "keep_alive_timeout_ms": 10000, 00:21:23.499 "arbitration_burst": 0, 00:21:23.499 "low_priority_weight": 0, 00:21:23.499 "medium_priority_weight": 0, 00:21:23.499 "high_priority_weight": 0, 00:21:23.499 "nvme_adminq_poll_period_us": 10000, 00:21:23.499 "nvme_ioq_poll_period_us": 0, 00:21:23.499 "io_queue_requests": 512, 00:21:23.499 "delay_cmd_submit": true, 00:21:23.499 "transport_retry_count": 4, 00:21:23.499 "bdev_retry_count": 3, 00:21:23.499 "transport_ack_timeout": 0, 00:21:23.499 "ctrlr_loss_timeout_sec": 0, 00:21:23.499 "reconnect_delay_sec": 0, 00:21:23.499 "fast_io_fail_timeout_sec": 0, 00:21:23.499 "disable_auto_failback": false, 00:21:23.499 "generate_uuids": false, 00:21:23.499 "transport_tos": 0, 00:21:23.499 "nvme_error_stat": false, 00:21:23.499 "rdma_srq_size": 0, 00:21:23.499 "io_path_stat": false, 00:21:23.499 "allow_accel_sequence": false, 00:21:23.499 "rdma_max_cq_size": 0, 00:21:23.499 "rdma_cm_event_timeout_ms": 0, 00:21:23.499 "dhchap_digests": [ 00:21:23.499 "sha256", 00:21:23.499 "sha384", 00:21:23.499 "sha512" 00:21:23.499 ], 00:21:23.499 "dhchap_dhgroups": [ 00:21:23.499 "null", 00:21:23.499 "ffdhe2048", 00:21:23.499 "ffdhe3072", 00:21:23.499 "ffdhe4096", 00:21:23.499 "ffdhe6144", 00:21:23.499 "ffdhe8192" 00:21:23.499 ] 00:21:23.499 } 00:21:23.499 }, 00:21:23.499 { 00:21:23.499 "method": "bdev_nvme_attach_controller", 00:21:23.499 "params": { 00:21:23.499 "name": "nvme0", 00:21:23.499 "trtype": "TCP", 00:21:23.499 "adrfam": "IPv4", 00:21:23.499 "traddr": "127.0.0.1", 00:21:23.499 "trsvcid": "4420", 00:21:23.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.499 "prchk_reftag": false, 00:21:23.499 "prchk_guard": false, 00:21:23.499 "ctrlr_loss_timeout_sec": 0, 00:21:23.499 "reconnect_delay_sec": 0, 00:21:23.499 "fast_io_fail_timeout_sec": 0, 00:21:23.500 "psk": "key0", 00:21:23.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.500 "hdgst": false, 00:21:23.500 "ddgst": false 00:21:23.500 } 00:21:23.500 }, 00:21:23.500 { 00:21:23.500 "method": "bdev_nvme_set_hotplug", 00:21:23.500 "params": { 00:21:23.500 "period_us": 100000, 00:21:23.500 "enable": false 00:21:23.500 } 00:21:23.500 }, 00:21:23.500 { 00:21:23.500 "method": "bdev_wait_for_examine" 00:21:23.500 } 00:21:23.500 ] 00:21:23.500 }, 00:21:23.500 { 00:21:23.500 "subsystem": "nbd", 00:21:23.500 "config": [] 00:21:23.500 } 00:21:23.500 ] 00:21:23.500 }' 00:21:23.500 19:59:17 keyring_file -- keyring/file.sh@114 -- # killprocess 85313 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85313 ']' 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85313 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85313 00:21:23.500 killing process with pid 85313 00:21:23.500 Received shutdown signal, test time was about 1.000000 seconds 00:21:23.500 00:21:23.500 Latency(us) 00:21:23.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.500 =================================================================================================================== 00:21:23.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85313' 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@967 -- # kill 85313 00:21:23.500 19:59:17 keyring_file -- common/autotest_common.sh@972 -- # wait 85313 00:21:23.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.758 19:59:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=85568 00:21:23.758 19:59:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85568 /var/tmp/bperf.sock 00:21:23.758 19:59:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85568 ']' 00:21:23.758 19:59:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:23.758 19:59:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.758 19:59:17 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:23.758 19:59:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:23.758 19:59:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:23.758 "subsystems": [ 00:21:23.758 { 00:21:23.758 "subsystem": "keyring", 00:21:23.758 "config": [ 00:21:23.758 { 00:21:23.758 "method": "keyring_file_add_key", 00:21:23.758 "params": { 00:21:23.758 "name": "key0", 00:21:23.758 "path": "/tmp/tmp.WHN7aHNoZ1" 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "keyring_file_add_key", 00:21:23.758 "params": { 00:21:23.758 "name": "key1", 00:21:23.758 "path": "/tmp/tmp.Hl7c88wKBx" 00:21:23.758 } 00:21:23.758 } 00:21:23.758 ] 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "subsystem": "iobuf", 00:21:23.758 "config": [ 00:21:23.758 { 00:21:23.758 "method": "iobuf_set_options", 00:21:23.758 "params": { 00:21:23.758 "small_pool_count": 8192, 00:21:23.758 "large_pool_count": 1024, 00:21:23.758 "small_bufsize": 8192, 00:21:23.758 "large_bufsize": 135168 00:21:23.758 } 00:21:23.758 } 00:21:23.758 ] 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "subsystem": "sock", 00:21:23.758 "config": [ 00:21:23.758 { 00:21:23.758 "method": "sock_set_default_impl", 00:21:23.758 "params": { 00:21:23.758 "impl_name": "uring" 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "sock_impl_set_options", 00:21:23.758 "params": { 00:21:23.758 "impl_name": "ssl", 00:21:23.758 "recv_buf_size": 4096, 00:21:23.758 "send_buf_size": 4096, 00:21:23.758 "enable_recv_pipe": true, 00:21:23.758 "enable_quickack": false, 00:21:23.758 "enable_placement_id": 0, 00:21:23.758 "enable_zerocopy_send_server": true, 00:21:23.758 "enable_zerocopy_send_client": false, 00:21:23.758 "zerocopy_threshold": 0, 00:21:23.758 "tls_version": 0, 00:21:23.758 "enable_ktls": false 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "sock_impl_set_options", 00:21:23.758 "params": { 00:21:23.758 "impl_name": "posix", 00:21:23.758 "recv_buf_size": 2097152, 00:21:23.758 "send_buf_size": 2097152, 00:21:23.758 "enable_recv_pipe": true, 00:21:23.758 "enable_quickack": false, 00:21:23.758 "enable_placement_id": 0, 00:21:23.758 "enable_zerocopy_send_server": true, 00:21:23.758 "enable_zerocopy_send_client": false, 00:21:23.758 "zerocopy_threshold": 0, 00:21:23.758 "tls_version": 0, 00:21:23.758 "enable_ktls": false 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "sock_impl_set_options", 00:21:23.758 "params": { 00:21:23.758 "impl_name": "uring", 00:21:23.758 "recv_buf_size": 2097152, 00:21:23.758 "send_buf_size": 2097152, 00:21:23.758 "enable_recv_pipe": true, 00:21:23.758 "enable_quickack": false, 00:21:23.758 "enable_placement_id": 0, 00:21:23.758 "enable_zerocopy_send_server": false, 00:21:23.758 "enable_zerocopy_send_client": false, 00:21:23.758 "zerocopy_threshold": 0, 00:21:23.758 "tls_version": 0, 00:21:23.758 "enable_ktls": false 00:21:23.758 } 00:21:23.758 } 00:21:23.758 ] 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "subsystem": "vmd", 00:21:23.758 "config": [] 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "subsystem": "accel", 00:21:23.758 "config": [ 00:21:23.758 { 00:21:23.758 "method": "accel_set_options", 00:21:23.758 "params": { 00:21:23.758 "small_cache_size": 128, 00:21:23.758 "large_cache_size": 16, 00:21:23.758 "task_count": 2048, 00:21:23.758 "sequence_count": 2048, 00:21:23.758 "buf_count": 2048 00:21:23.758 } 00:21:23.758 } 00:21:23.758 ] 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "subsystem": "bdev", 00:21:23.758 "config": [ 00:21:23.758 { 00:21:23.758 "method": "bdev_set_options", 00:21:23.758 "params": { 00:21:23.758 "bdev_io_pool_size": 65535, 
00:21:23.758 "bdev_io_cache_size": 256, 00:21:23.758 "bdev_auto_examine": true, 00:21:23.758 "iobuf_small_cache_size": 128, 00:21:23.758 "iobuf_large_cache_size": 16 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "bdev_raid_set_options", 00:21:23.758 "params": { 00:21:23.758 "process_window_size_kb": 1024 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "bdev_iscsi_set_options", 00:21:23.758 "params": { 00:21:23.758 "timeout_sec": 30 00:21:23.758 } 00:21:23.758 }, 00:21:23.758 { 00:21:23.758 "method": "bdev_nvme_set_options", 00:21:23.758 "params": { 00:21:23.758 "action_on_timeout": "none", 00:21:23.758 "timeout_us": 0, 00:21:23.758 "timeout_admin_us": 0, 00:21:23.758 "keep_alive_timeout_ms": 10000, 00:21:23.758 "arbitration_burst": 0, 00:21:23.758 "low_priority_weight": 0, 00:21:23.758 "medium_priority_weight": 0, 00:21:23.758 "high_priority_weight": 0, 00:21:23.758 "nvme_adminq_poll_period_us": 10000, 00:21:23.758 "nvme_ioq_poll_period_us": 0, 00:21:23.758 "io_queue_requests": 512, 00:21:23.758 "delay_cmd_submit": true, 00:21:23.758 "transport_retry_count": 4, 00:21:23.758 "bdev_retry_count": 3, 00:21:23.758 "transport_ack_timeout": 0, 00:21:23.758 "ctrlr_loss_timeout_sec": 0, 00:21:23.758 "reconnect_delay_sec": 0, 00:21:23.758 "fast_io_fail_timeout_sec": 0, 00:21:23.758 "disable_auto_failback": false, 00:21:23.758 "generate_uuids": false, 00:21:23.758 "transport_tos": 0, 00:21:23.758 "nvme_error_stat": false, 00:21:23.758 "rdma_srq_size": 0, 00:21:23.758 "io_path_stat": false, 00:21:23.758 "allow_accel_sequence": false, 00:21:23.758 "rdma_max_cq_size": 0, 00:21:23.758 "rdma_cm_event_timeout_ms": 0, 00:21:23.758 "dhchap_digests": [ 00:21:23.758 "sha256", 00:21:23.758 "sha384", 00:21:23.758 "sha512" 00:21:23.758 ], 00:21:23.758 "dhchap_dhgroups": [ 00:21:23.758 "null", 00:21:23.758 "ffdhe2048", 00:21:23.758 "ffdhe3072", 00:21:23.758 "ffdhe4096", 00:21:23.758 "ffdhe6144", 00:21:23.759 "ffdhe8192" 00:21:23.759 ] 00:21:23.759 } 00:21:23.759 }, 00:21:23.759 { 00:21:23.759 "method": "bdev_nvme_attach_controller", 00:21:23.759 "params": { 00:21:23.759 "name": "nvme0", 00:21:23.759 "trtype": "TCP", 00:21:23.759 "adrfam": "IPv4", 00:21:23.759 "traddr": "127.0.0.1", 00:21:23.759 "trsvcid": "4420", 00:21:23.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.759 "prchk_reftag": false, 00:21:23.759 "prchk_guard": false, 00:21:23.759 "ctrlr_loss_timeout_sec": 0, 00:21:23.759 "reconnect_delay_sec": 0, 00:21:23.759 "fast_io_fail_timeout_sec": 0, 00:21:23.759 "psk": "key0", 00:21:23.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.759 "hdgst": false, 00:21:23.759 "ddgst": false 00:21:23.759 } 00:21:23.759 }, 00:21:23.759 { 00:21:23.759 "method": "bdev_nvme_set_hotplug", 00:21:23.759 "params": { 00:21:23.759 "period_us": 100000, 00:21:23.759 "enable": false 00:21:23.759 } 00:21:23.759 }, 00:21:23.759 { 00:21:23.759 "method": "bdev_wait_for_examine" 00:21:23.759 } 00:21:23.759 ] 00:21:23.759 }, 00:21:23.759 { 00:21:23.759 "subsystem": "nbd", 00:21:23.759 "config": [] 00:21:23.759 } 00:21:23.759 ] 00:21:23.759 }' 00:21:23.759 19:59:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.759 19:59:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:23.759 [2024-07-15 19:59:17.853618] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:21:23.759 [2024-07-15 19:59:17.853719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85568 ] 00:21:23.759 [2024-07-15 19:59:17.986607] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.017 [2024-07-15 19:59:18.095260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.017 [2024-07-15 19:59:18.229785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:24.280 [2024-07-15 19:59:18.283989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.572 19:59:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.572 19:59:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:24.572 19:59:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:24.572 19:59:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.572 19:59:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:24.830 19:59:19 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:24.830 19:59:19 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:24.830 19:59:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.830 19:59:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:24.830 19:59:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.830 19:59:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.830 19:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.396 19:59:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:25.396 19:59:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:25.396 19:59:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:25.396 19:59:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:25.396 19:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:25.396 19:59:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:25.655 19:59:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:25.655 19:59:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:25.655 19:59:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WHN7aHNoZ1 /tmp/tmp.Hl7c88wKBx 00:21:25.655 19:59:19 keyring_file -- keyring/file.sh@20 -- # killprocess 85568 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85568 ']' 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85568 00:21:25.655 19:59:19 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85568 00:21:25.655 killing process with pid 85568 00:21:25.655 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.655 00:21:25.655 Latency(us) 00:21:25.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.655 =================================================================================================================== 00:21:25.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85568' 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@967 -- # kill 85568 00:21:25.655 19:59:19 keyring_file -- common/autotest_common.sh@972 -- # wait 85568 00:21:25.913 19:59:20 keyring_file -- keyring/file.sh@21 -- # killprocess 85296 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85296 ']' 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85296 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85296 00:21:25.913 killing process with pid 85296 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85296' 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@967 -- # kill 85296 00:21:25.913 [2024-07-15 19:59:20.106669] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.913 19:59:20 keyring_file -- common/autotest_common.sh@972 -- # wait 85296 00:21:26.481 00:21:26.481 real 0m16.338s 00:21:26.481 user 0m40.763s 00:21:26.481 sys 0m3.175s 00:21:26.481 19:59:20 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.481 19:59:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.481 ************************************ 00:21:26.481 END TEST keyring_file 00:21:26.481 ************************************ 00:21:26.481 19:59:20 -- common/autotest_common.sh@1142 -- # return 0 00:21:26.481 19:59:20 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:26.481 19:59:20 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:26.481 19:59:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:26.481 19:59:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.481 19:59:20 -- common/autotest_common.sh@10 -- # set +x 00:21:26.481 ************************************ 00:21:26.481 START TEST keyring_linux 00:21:26.481 ************************************ 00:21:26.481 19:59:20 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:26.481 * Looking for test 
storage... 00:21:26.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:26.481 19:59:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:26.481 19:59:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7fce926-7bf5-4841-86b1-6d78480abc2c 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f7fce926-7bf5-4841-86b1-6d78480abc2c 00:21:26.481 19:59:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.482 19:59:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.482 19:59:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.482 19:59:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.482 19:59:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.482 19:59:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.482 19:59:20 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.482 19:59:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:26.482 19:59:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:26.482 19:59:20 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:26.482 /tmp/:spdk-test:key0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:26.482 19:59:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:26.482 19:59:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:26.482 19:59:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:26.740 19:59:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:26.740 /tmp/:spdk-test:key1 00:21:26.740 19:59:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:26.740 19:59:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.740 19:59:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85681 00:21:26.740 19:59:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85681 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85681 ']' 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.740 19:59:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:26.740 [2024-07-15 19:59:20.811944] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:21:26.740 [2024-07-15 19:59:20.812049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85681 ] 00:21:26.740 [2024-07-15 19:59:20.950581] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.998 [2024-07-15 19:59:21.062859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.998 [2024-07-15 19:59:21.116503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:27.563 19:59:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.563 19:59:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:27.563 19:59:21 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.563 19:59:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:27.563 [2024-07-15 19:59:21.735873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.563 null0 00:21:27.563 [2024-07-15 19:59:21.767832] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.563 [2024-07-15 19:59:21.768075] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:27.563 19:59:21 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:27.563 177027005 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:27.563 130127554 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85699 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85699 /var/tmp/bperf.sock 00:21:27.563 19:59:21 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85699 ']' 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.564 19:59:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:27.822 [2024-07-15 19:59:21.847519] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:21:27.822 [2024-07-15 19:59:21.847627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85699 ] 00:21:27.822 [2024-07-15 19:59:21.988645] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.080 [2024-07-15 19:59:22.114684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.647 19:59:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.647 19:59:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:28.647 19:59:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:28.647 19:59:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:28.905 19:59:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:28.905 19:59:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:29.164 [2024-07-15 19:59:23.328752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:29.164 19:59:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:29.164 19:59:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:29.422 [2024-07-15 19:59:23.664398] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.680 nvme0n1 00:21:29.680 19:59:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:29.680 19:59:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:29.680 19:59:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:29.680 19:59:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:29.680 19:59:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:29.680 19:59:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.938 19:59:24 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:29.938 19:59:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:29.938 19:59:24 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:29.938 19:59:24 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:29.938 19:59:24 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:29.938 19:59:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.938 19:59:24 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@25 -- # sn=177027005 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:30.196 
19:59:24 keyring_linux -- keyring/linux.sh@26 -- # [[ 177027005 == \1\7\7\0\2\7\0\0\5 ]] 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 177027005 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:30.196 19:59:24 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:30.454 Running I/O for 1 seconds... 00:21:31.389 00:21:31.389 Latency(us) 00:21:31.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.389 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:31.389 nvme0n1 : 1.01 10599.77 41.41 0.00 0.00 11996.41 3336.38 13702.98 00:21:31.389 =================================================================================================================== 00:21:31.389 Total : 10599.77 41.41 0.00 0.00 11996.41 3336.38 13702.98 00:21:31.389 0 00:21:31.389 19:59:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:31.389 19:59:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:31.647 19:59:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:31.647 19:59:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:31.647 19:59:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:31.647 19:59:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:31.647 19:59:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.647 19:59:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:31.904 19:59:26 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:31.905 19:59:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:31.905 19:59:26 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:31.905 19:59:26 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.905 19:59:26 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:31.905 19:59:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:32.163 [2024-07-15 19:59:26.332566] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:32.163 [2024-07-15 19:59:26.333077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe41e50 (107): Transport endpoint is not connected 00:21:32.163 [2024-07-15 19:59:26.334064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe41e50 (9): Bad file descriptor 00:21:32.163 [2024-07-15 19:59:26.335060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:32.163 [2024-07-15 19:59:26.335084] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:32.163 [2024-07-15 19:59:26.335096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:32.163 request: 00:21:32.163 { 00:21:32.163 "name": "nvme0", 00:21:32.163 "trtype": "tcp", 00:21:32.163 "traddr": "127.0.0.1", 00:21:32.163 "adrfam": "ipv4", 00:21:32.163 "trsvcid": "4420", 00:21:32.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:32.163 "prchk_reftag": false, 00:21:32.163 "prchk_guard": false, 00:21:32.163 "hdgst": false, 00:21:32.163 "ddgst": false, 00:21:32.163 "psk": ":spdk-test:key1", 00:21:32.163 "method": "bdev_nvme_attach_controller", 00:21:32.163 "req_id": 1 00:21:32.163 } 00:21:32.163 Got JSON-RPC error response 00:21:32.163 response: 00:21:32.163 { 00:21:32.163 "code": -5, 00:21:32.163 "message": "Input/output error" 00:21:32.163 } 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@33 -- # sn=177027005 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 177027005 00:21:32.163 1 links removed 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@33 -- # sn=130127554 00:21:32.163 19:59:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 130127554 00:21:32.163 1 links removed 00:21:32.163 19:59:26 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85699 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85699 ']' 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85699 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85699 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.163 killing process with pid 85699 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85699' 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 85699 00:21:32.163 Received shutdown signal, test time was about 1.000000 seconds 00:21:32.163 00:21:32.163 Latency(us) 00:21:32.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.163 =================================================================================================================== 00:21:32.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.163 19:59:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 85699 00:21:32.422 19:59:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85681 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85681 ']' 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85681 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85681 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.422 killing process with pid 85681 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85681' 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 85681 00:21:32.422 19:59:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 85681 00:21:32.987 00:21:32.987 real 0m6.628s 00:21:32.987 user 0m12.819s 00:21:32.987 sys 0m1.629s 00:21:32.988 19:59:27 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.988 19:59:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:32.988 ************************************ 00:21:32.988 END TEST keyring_linux 00:21:32.988 ************************************ 00:21:32.988 19:59:27 -- common/autotest_common.sh@1142 -- # return 0 00:21:32.988 19:59:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
00:21:32.988 19:59:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:32.988 19:59:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:32.988 19:59:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:32.988 19:59:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:32.988 19:59:27 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:32.988 19:59:27 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:32.988 19:59:27 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:32.988 19:59:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.988 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:21:32.988 19:59:27 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:32.988 19:59:27 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:32.988 19:59:27 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:32.988 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:21:34.900 INFO: APP EXITING 00:21:34.900 INFO: killing all VMs 00:21:34.900 INFO: killing vhost app 00:21:34.900 INFO: EXIT DONE 00:21:35.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.415 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:35.415 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:35.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.983 Cleaning 00:21:35.983 Removing: /var/run/dpdk/spdk0/config 00:21:35.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:35.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:35.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:35.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:35.983 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:35.983 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:35.983 Removing: /var/run/dpdk/spdk1/config 00:21:35.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:35.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:35.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:35.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:35.983 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:35.983 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:35.983 Removing: /var/run/dpdk/spdk2/config 00:21:35.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:35.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:35.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:35.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:35.983 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:35.983 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:35.983 Removing: /var/run/dpdk/spdk3/config 00:21:35.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:36.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:36.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:36.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:36.271 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:36.271 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:36.271 Removing: /var/run/dpdk/spdk4/config 00:21:36.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:36.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:36.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:36.271 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:36.271 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:36.271 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:36.271 Removing: /dev/shm/nvmf_trace.0 00:21:36.271 Removing: /dev/shm/spdk_tgt_trace.pid58781 00:21:36.271 Removing: /var/run/dpdk/spdk0 00:21:36.271 Removing: /var/run/dpdk/spdk1 00:21:36.271 Removing: /var/run/dpdk/spdk2 00:21:36.271 Removing: /var/run/dpdk/spdk3 00:21:36.271 Removing: /var/run/dpdk/spdk4 00:21:36.271 Removing: /var/run/dpdk/spdk_pid58636 00:21:36.271 Removing: /var/run/dpdk/spdk_pid58781 00:21:36.271 Removing: /var/run/dpdk/spdk_pid58974 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59059 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59088 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59202 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59216 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59339 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59535 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59675 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59740 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59816 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59906 00:21:36.271 Removing: /var/run/dpdk/spdk_pid59979 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60017 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60047 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60109 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60208 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60641 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60693 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60744 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60760 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60827 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60843 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60910 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60926 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60972 00:21:36.271 Removing: /var/run/dpdk/spdk_pid60990 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61035 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61053 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61176 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61206 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61280 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61332 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61356 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61419 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61455 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61488 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61524 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61553 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61593 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61622 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61662 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61691 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61733 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61762 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61802 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61831 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61871 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61900 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61943 00:21:36.271 Removing: /var/run/dpdk/spdk_pid61978 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62015 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62053 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62087 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62123 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62187 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62280 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62588 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62601 00:21:36.271 
Removing: /var/run/dpdk/spdk_pid62637 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62656 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62666 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62690 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62704 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62725 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62744 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62763 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62773 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62803 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62811 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62832 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62851 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62870 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62880 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62905 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62918 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62939 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62970 00:21:36.271 Removing: /var/run/dpdk/spdk_pid62984 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63019 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63086 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63115 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63124 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63153 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63162 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63175 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63218 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63231 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63260 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63269 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63283 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63294 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63303 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63313 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63322 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63332 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63366 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63392 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63402 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63436 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63440 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63453 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63493 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63505 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63537 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63539 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63552 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63565 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63567 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63580 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63593 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63595 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63669 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63722 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63832 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63870 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63916 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63931 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63953 00:21:36.528 Removing: /var/run/dpdk/spdk_pid63973 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64004 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64020 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64090 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64111 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64155 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64227 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64282 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64309 00:21:36.528 Removing: 
/var/run/dpdk/spdk_pid64398 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64445 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64479 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64703 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64800 00:21:36.528 Removing: /var/run/dpdk/spdk_pid64829 00:21:36.528 Removing: /var/run/dpdk/spdk_pid65153 00:21:36.528 Removing: /var/run/dpdk/spdk_pid65190 00:21:36.528 Removing: /var/run/dpdk/spdk_pid65482 00:21:36.528 Removing: /var/run/dpdk/spdk_pid65891 00:21:36.528 Removing: /var/run/dpdk/spdk_pid66166 00:21:36.528 Removing: /var/run/dpdk/spdk_pid66940 00:21:36.528 Removing: /var/run/dpdk/spdk_pid67765 00:21:36.528 Removing: /var/run/dpdk/spdk_pid67880 00:21:36.528 Removing: /var/run/dpdk/spdk_pid67949 00:21:36.528 Removing: /var/run/dpdk/spdk_pid69221 00:21:36.528 Removing: /var/run/dpdk/spdk_pid69428 00:21:36.528 Removing: /var/run/dpdk/spdk_pid72788 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73090 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73198 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73323 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73346 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73374 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73401 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73498 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73628 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73784 00:21:36.528 Removing: /var/run/dpdk/spdk_pid73860 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74053 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74142 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74236 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74538 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74921 00:21:36.528 Removing: /var/run/dpdk/spdk_pid74923 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75200 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75214 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75228 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75259 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75264 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75566 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75609 00:21:36.528 Removing: /var/run/dpdk/spdk_pid75886 00:21:36.786 Removing: /var/run/dpdk/spdk_pid76084 00:21:36.786 Removing: /var/run/dpdk/spdk_pid76465 00:21:36.786 Removing: /var/run/dpdk/spdk_pid76969 00:21:36.786 Removing: /var/run/dpdk/spdk_pid77776 00:21:36.786 Removing: /var/run/dpdk/spdk_pid78371 00:21:36.786 Removing: /var/run/dpdk/spdk_pid78373 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80258 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80324 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80383 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80439 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80554 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80609 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80669 00:21:36.786 Removing: /var/run/dpdk/spdk_pid80730 00:21:36.786 Removing: /var/run/dpdk/spdk_pid81050 00:21:36.786 Removing: /var/run/dpdk/spdk_pid82197 00:21:36.786 Removing: /var/run/dpdk/spdk_pid82337 00:21:36.786 Removing: /var/run/dpdk/spdk_pid82580 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83130 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83289 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83446 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83543 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83715 00:21:36.786 Removing: /var/run/dpdk/spdk_pid83824 00:21:36.786 Removing: /var/run/dpdk/spdk_pid84486 00:21:36.786 Removing: /var/run/dpdk/spdk_pid84516 00:21:36.786 Removing: /var/run/dpdk/spdk_pid84552 00:21:36.786 Removing: /var/run/dpdk/spdk_pid84805 
00:21:36.786 Removing: /var/run/dpdk/spdk_pid84837 00:21:36.786 Removing: /var/run/dpdk/spdk_pid84873 00:21:36.786 Removing: /var/run/dpdk/spdk_pid85296 00:21:36.786 Removing: /var/run/dpdk/spdk_pid85313 00:21:36.786 Removing: /var/run/dpdk/spdk_pid85568 00:21:36.786 Removing: /var/run/dpdk/spdk_pid85681 00:21:36.786 Removing: /var/run/dpdk/spdk_pid85699 00:21:36.786 Clean 00:21:36.786 19:59:30 -- common/autotest_common.sh@1451 -- # return 0 00:21:36.786 19:59:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:36.786 19:59:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.786 19:59:30 -- common/autotest_common.sh@10 -- # set +x 00:21:36.786 19:59:30 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:36.786 19:59:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.786 19:59:30 -- common/autotest_common.sh@10 -- # set +x 00:21:36.786 19:59:31 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:36.786 19:59:31 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:36.786 19:59:31 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:36.786 19:59:31 -- spdk/autotest.sh@391 -- # hash lcov 00:21:36.786 19:59:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:37.044 19:59:31 -- spdk/autotest.sh@393 -- # hostname 00:21:37.044 19:59:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:37.044 geninfo: WARNING: invalid characters removed from testname! 
00:22:03.603 19:59:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:06.131 20:00:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.483 20:00:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.009 20:00:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:14.593 20:00:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.121 20:00:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.401 20:00:14 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:20.401 20:00:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.401 20:00:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:20.401 20:00:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.401 20:00:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.401 20:00:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.401 20:00:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.401 20:00:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.401 20:00:14 -- paths/export.sh@5 -- $ export PATH 00:22:20.401 20:00:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.401 20:00:14 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:20.401 20:00:14 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:20.401 20:00:14 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721073614.XXXXXX 00:22:20.401 20:00:14 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721073614.YSvqnw 00:22:20.401 20:00:14 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:20.401 20:00:14 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:20.401 20:00:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:20.401 20:00:14 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:20.401 20:00:14 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:20.401 20:00:14 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:20.401 20:00:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:20.401 20:00:14 -- common/autotest_common.sh@10 -- $ set +x 00:22:20.401 20:00:14 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:20.401 20:00:14 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:20.401 20:00:14 -- pm/common@17 -- $ local monitor 00:22:20.401 20:00:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:20.401 20:00:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:20.401 20:00:14 -- pm/common@25 -- $ sleep 1 00:22:20.401 20:00:14 -- pm/common@21 -- $ date +%s 00:22:20.401 20:00:14 -- pm/common@21 -- $ date +%s 00:22:20.401 20:00:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073614 00:22:20.401 20:00:14 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073614 00:22:20.401 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073614_collect-vmstat.pm.log 00:22:20.401 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073614_collect-cpu-load.pm.log 00:22:21.333 20:00:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:21.333 20:00:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:21.333 20:00:15 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:21.333 20:00:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:21.333 20:00:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:21.333 20:00:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:21.333 20:00:15 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:21.333 20:00:15 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:21.333 20:00:15 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:21.333 20:00:15 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:21.333 20:00:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:21.333 20:00:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:21.333 20:00:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:21.333 20:00:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:21.333 20:00:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:21.333 20:00:15 -- pm/common@44 -- $ pid=87455 00:22:21.333 20:00:15 -- pm/common@50 -- $ kill -TERM 87455 00:22:21.333 20:00:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:21.333 20:00:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:21.333 20:00:15 -- pm/common@44 -- $ pid=87456 00:22:21.333 20:00:15 -- pm/common@50 -- $ kill -TERM 87456 00:22:21.333 + [[ -n 5095 ]] 00:22:21.333 + sudo kill 5095 00:22:21.599 [Pipeline] } 00:22:21.617 [Pipeline] // timeout 00:22:21.623 [Pipeline] } 00:22:21.640 [Pipeline] // stage 00:22:21.646 [Pipeline] } 00:22:21.662 [Pipeline] // catchError 00:22:21.672 [Pipeline] stage 00:22:21.698 [Pipeline] { (Stop VM) 00:22:21.717 [Pipeline] sh 00:22:22.001 + vagrant halt 00:22:26.179 ==> default: Halting domain... 00:22:31.452 [Pipeline] sh 00:22:31.730 + vagrant destroy -f 00:22:35.016 ==> default: Removing domain... 
00:22:35.285 [Pipeline] sh 00:22:35.565 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:35.573 [Pipeline] } 00:22:35.587 [Pipeline] // stage 00:22:35.592 [Pipeline] } 00:22:35.605 [Pipeline] // dir 00:22:35.611 [Pipeline] } 00:22:35.623 [Pipeline] // wrap 00:22:35.628 [Pipeline] } 00:22:35.639 [Pipeline] // catchError 00:22:35.646 [Pipeline] stage 00:22:35.648 [Pipeline] { (Epilogue) 00:22:35.660 [Pipeline] sh 00:22:35.936 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:42.541 [Pipeline] catchError 00:22:42.542 [Pipeline] { 00:22:42.555 [Pipeline] sh 00:22:42.834 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:42.834 Artifacts sizes are good 00:22:42.842 [Pipeline] } 00:22:42.858 [Pipeline] // catchError 00:22:42.868 [Pipeline] archiveArtifacts 00:22:42.874 Archiving artifacts 00:22:43.115 [Pipeline] cleanWs 00:22:43.124 [WS-CLEANUP] Deleting project workspace... 00:22:43.124 [WS-CLEANUP] Deferred wipeout is used... 00:22:43.129 [WS-CLEANUP] done 00:22:43.130 [Pipeline] } 00:22:43.141 [Pipeline] // stage 00:22:43.145 [Pipeline] } 00:22:43.154 [Pipeline] // node 00:22:43.158 [Pipeline] End of Pipeline 00:22:43.196 Finished: SUCCESS